The SDK ensures compatibility with multiple platforms, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above, and minimizes dependencies to prevent version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the key features of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file from a URL:

using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

For local files, similar code can be used:

await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications that require immediate processing of audio data:

using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}"));

transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}"));

await transcriber.ConnectAsync();

// Pseudocode for capturing audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();

Using LeMUR for LLM Applications

The SDK integrates with LeMUR to enable developers to build large language model (LLM) applications on voice data. Here is an example:

var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);
Console.WriteLine(response.Response);

Audio Intelligence Models

Additionally, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features:

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}

To learn more, visit the official AssemblyAI blog.

Image source: Shutterstock
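As an addendum to the transcription examples above: they call EnsureStatusCompleted(), which throws an exception if the transcription job did not finish. For applications that prefer not to throw, a minimal sketch of branching on the transcript status instead is shown below. This assumes the transcript object exposes Status and Error properties and a TranscriptStatus enum, as in the SDK's transcript model; treat the exact names as an assumption to verify against the SDK reference.

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

// Instead of EnsureStatusCompleted(), inspect the status yourself
// (assumed property/enum names: Status, Error, TranscriptStatus.Error).
if (transcript.Status == TranscriptStatus.Error)
{
    Console.WriteLine($"Transcription failed: {transcript.Error}");
}
else
{
    Console.WriteLine(transcript.Text);
}
```

This style keeps failed jobs on the normal control path, which can be easier to handle in batch pipelines than catching exceptions per file.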