
.Net: Bug: gemini-1.5-flash function calling is not working; getting 400 Bad Request error #10223

Open
PraveenVerma17 opened this issue Jan 17, 2025 · 3 comments
Labels: bug, .NET, triage

Comments

@PraveenVerma17

Describe the bug
Kernel function calling is not working properly with gemini-1.5-flash for the code below.

```csharp
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.Google;
using SementicKernelDemo.Plugins.LightsPlugin;

string? modelID = Environment.GetEnvironmentVariable("MODEL_ID");
string? apiKey = Environment.GetEnvironmentVariable("API_KEY");

if (string.IsNullOrEmpty(modelID) || string.IsNullOrEmpty(apiKey))
    throw new Exception("Please set the MODEL_ID and API_KEY environment variables to your Google AI Gemini model ID and API key, respectively.");

// Create kernel
#pragma warning disable SKEXP0070
var kernel = Kernel.CreateBuilder()
                   .AddGoogleAIGeminiChatCompletion(modelID, apiKey, GoogleAIVersion.V1)
                   .Build();

// Add plugins
kernel.Plugins.AddFromType<LightsPlugin>("Lights");

// Enable prompt execution settings
GeminiPromptExecutionSettings geminiPromptExecutionSettings = new()
{
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(),
    ToolCallBehavior = GeminiToolCallBehavior.EnableKernelFunctions
};

var chatCompletionService = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory();

string? userInput;
do
{
    Console.WriteLine("User >");
    userInput = Console.ReadLine();
    try
    {
        history.AddUserMessage(userInput);
        var result = await chatCompletionService.GetChatMessageContentAsync(
            history,
            executionSettings: geminiPromptExecutionSettings,
            kernel: kernel);

        Console.WriteLine("Assistant > " + result);
        history.AddMessage(result.Role, result.Content ?? string.Empty);
    }
    catch (Exception ex)
    {
        Console.WriteLine("Error: " + ex.Message);
    }
} while (userInput is not null);

#pragma warning restore SKEXP0070
```

To Reproduce
Steps to reproduce the behavior:
Execute the above code; it fails when ToolCallBehavior = GeminiToolCallBehavior.EnableKernelFunctions or ToolCallBehavior = GeminiToolCallBehavior.AutoInvokeKernelFunctions is set. If I don't set it, the model does not recognize kernel functions.
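
For anyone reproducing this: the LightsPlugin referenced above is not included in the issue, so here is a minimal stand-in sketch (the function names and state are my assumptions, loosely following the lights example from the SK getting-started docs):

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// Minimal stand-in for the LightsPlugin referenced in the repro.
// Names and behavior are assumptions; the original plugin is not shown.
public class LightsPlugin
{
    private bool _isOn;

    [KernelFunction("get_state")]
    [Description("Gets the current state of the lights.")]
    public string GetState() => _isOn ? "on" : "off";

    [KernelFunction("change_state")]
    [Description("Turns the lights on or off.")]
    public string ChangeState(bool isOn)
    {
        _isOn = isOn;
        return _isOn ? "on" : "off";
    }
}
```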

Expected behavior
It should recognize kernel functions and return the result from my plugin.

Platform

  • OS: Windows
  • IDE: Visual Studio
  • Language: C#
  • Source: Microsoft.SemanticKernel and Microsoft.SemanticKernel.Connectors.Google

@thevivekm

I am also facing the same issue with gemini-1.5-flash; it doesn't invoke the kernel functions.

@thevivekm

Change GeminiPromptExecutionSettings to this:

```csharp
GeminiPromptExecutionSettings geminiAIPromptExecutionSettings = new()
{
    ToolCallBehavior = GeminiToolCallBehavior.AutoInvokeKernelFunctions
};
```
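
For context, this is how it would slot into the original loop; a sketch assuming only ToolCallBehavior is set (I believe ToolCallBehavior and FunctionChoiceBehavior are alternative mechanisms for enabling function calling, so setting both on the same settings object may be what produces the malformed request, though I haven't confirmed that):

```csharp
// Sketch: only ToolCallBehavior is set; FunctionChoiceBehavior is left unset.
GeminiPromptExecutionSettings settings = new()
{
    ToolCallBehavior = GeminiToolCallBehavior.AutoInvokeKernelFunctions
};

var reply = await chatCompletionService.GetChatMessageContentAsync(
    history,
    executionSettings: settings,
    kernel: kernel);
```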

@PraveenVerma17 (Author)

@thevivekm I changed to the gemini-2.0-flash-exp model and now add the message to history only if content is available; otherwise, from the next request onward the LLM returns 400 Bad Request.

```csharp
Console.WriteLine("Assistant > " + result);

if (!string.IsNullOrEmpty(result?.Content))
    history.AddMessage(result.Role, result.Content);
```
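
A possible reason Content is null here: when the model answers with a function call instead of text, the message carries the call in its items rather than in Content. If so, appending the whole message object (instead of just its text) should keep that information in the history; a sketch, assuming ChatHistory's list semantics:

```csharp
// Sketch: ChatHistory is a list of ChatMessageContent, so the full message
// (including any function-call items) can be appended directly.
if (result is not null)
    history.Add(result);
```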

Now I am facing a different problem: prompts are calling the native functions, but somehow the LLM is not recognizing their output. I don't want to create separate native functions; I was hoping that when the LLM executed a native function, it would understand the output.

Here are my skprompt.txt and config.json:

```
This is a list of songs {{Songs.get_list_of_songs}}; add them to the chat history if not already available.
These are the recently played songs {{User.user_recently_played_songs}}; add them to the chat history if not already available.

You are an AI assistant. The assistant is helpful, creative, clever, and very friendly.
Suggest a new song to the user based on the list of songs, but not one that was recently played.
When responding to the user, you can use the following format:
"Hey, I have a song suggestion for you. How about listening to {title} by {artist}?"
The suggested song must be from the available songs list. If no song list is available, please inform the user.
If all songs are in the recently played list, select a random song from the available songs list.
```

```json
{
  "schema": 1,
  "type": "completion",
  "description": "Suggest a new song",
  "execution_settings": {
    "default": {
      "max_tokens": 200,
      "temperature": 0.7
    }
  }
}
```
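
In case it helps other beginners, this is roughly how a prompt directory like that gets loaded and invoked; a sketch where the directory layout and names ("Plugins/MusicPlugin", "SuggestSong") are hypothetical:

```csharp
// Sketch: load a prompt-template function whose folder contains
// skprompt.txt and config.json. Paths and names are hypothetical.
var musicPlugin = kernel.CreatePluginFromPromptDirectory("Plugins/MusicPlugin");
kernel.Plugins.Add(musicPlugin);

// The template references {{Songs.get_list_of_songs}} and
// {{User.user_recently_played_songs}}, so those native plugins must already
// be registered on the kernel for the prompt to render.
var suggestion = await kernel.InvokeAsync(musicPlugin["SuggestSong"]);
Console.WriteLine(suggestion);
```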

I am new to Microsoft Semantic Kernel and am learning the concepts.
