In this demo, you'll create functions that generate situational prompts and corresponding scenery images, and you'll implement speech recognition and synthesis.
Start by creating a function to generate situational prompts. This function will create an initial situational setting and a response.
Begin by writing the skeleton of your function.
# Function to generate a situational prompt for practicing English
def generate_situational_prompt(seed_prompt=""):
    # Define additional prompt instructions
    additional_prompt = """
Then create an initial response to the person. If the situation
is "ordering coffee in a cafe.", then the initial response will
be, "Hello, what would you like to order?". Separate the initial
situation and the initial response with a line containing "====".
Something like:
"You're ordering coffee in a cafe.
====
'Hello, there. What would you like to order?'"
Limit the output to 1 sentence.
"""
In this initial part of the function, you set up the additional instructions that will guide the generation of situational prompts. The additional_prompt variable provides a template for the type of response you expect.
    # Check if a seed prompt is provided and create the seed
    # phrase accordingly
    if seed_prompt:
        seed_phrase = f"""Generate a second-person POV situation
for practicing English with this seed prompt: {seed_prompt}.
{additional_prompt}"""
    else:
        seed_phrase = f"""Generate a second-person POV situation
for practicing English, like meeting your parents-in-law,
etc.
{additional_prompt}"""
Here, you check whether you have a specific seed_prompt. If so, you incorporate it into your seed_phrase. Otherwise, use a generic prompt for generating a situation.
Now, use GPT to generate your situational prompt.
    # Use GPT to generate a situation for practicing English
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a creative writer. Very very creative."},
            {"role": "user", "content": seed_phrase}
        ]
    )
    # Extract the situation and the initial response from the response
    message = response.choices[0].message.content
    # Return the generated message
    return message
Now, the function is complete. Test the function to ensure it's working correctly.
# Test the function to generate a situational prompt
generate_situational_prompt()
Then, test it again with a seed prompt.
# Test the function to generate a situational prompt with a seed prompt
generate_situational_prompt("comics exhibition")
Next, create a function to generate a scenery image that matches the situational prompt. This function uses the DALL-E model.
# Generate an image based on the situational prompt
# Import necessary libraries for image processing and display
import requests
from PIL import Image
from io import BytesIO

def generate_situation_image(dalle_prompt):
    # Generate an image using the DALL-E 3 model with the provided prompt
    response = client.images.generate(
        model="dall-e-3",  # Specify the model to use
        prompt=dalle_prompt,  # The prompt describing the image to generate
        size="1024x1024",  # Specify the size of the generated image
        n=1,  # Number of images to generate
    )
    # Retrieve the URL of the generated image
    image_url = response.data[0].url
    # Download the image from the URL
    response = requests.get(image_url)
    # Open the image using PIL
    img = Image.open(BytesIO(response.content))
    # Return the image object
    return img
Then, create a function to display the image.
# Display the image in the cell
import matplotlib.pyplot as plt

def display_image(img):
    plt.imshow(img)
    plt.axis('off')
    plt.show()
Next, combine both functions to generate a situation and its matching image.
# Combine the functions to generate a situational prompt and
# its matching image
full_response = generate_situational_prompt("cafe")
initial_situation_prompt = full_response.split('====')[0].strip()
print(initial_situation_prompt)
img = generate_situation_image(initial_situation_prompt)
display_image(img)
At first, you get the situational prompt with the seed prompt "cafe". But the situational prompt includes more than
just a situation. It has the initial response from a person in that situation. In this example, the initial response
would be a greeting from an employee at the cafe. But in generating an image representing the situation, you don't
need that initial response. So, you have to take it out first.
The code, full_response.split('====')[0].strip(), splits
the full response at the delimiter ==== and takes the first part (which is the initial situation prompt). The strip()
method is used to remove any leading or trailing whitespace from the string.
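To see this parsing in isolation, here's a minimal sketch with a hypothetical response string that follows the same "====" convention:
# A hypothetical full response following the "====" convention
sample_response = """You're ordering coffee in a cafe.
====
'Hello, there. What would you like to order?'"""
# The part before the delimiter is the situation
print(sample_response.split('====')[0].strip())
# Output: You're ordering coffee in a cafe.
# The part after the delimiter is the initial response
print(sample_response.split('====')[1].strip())
# Output: 'Hello, there. What would you like to order?'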
Now, create a function to play the audio file. Add the following code in Jupyter Lab:
# Play the audio file
# Import necessary libraries for audio processing and display
import librosa
from IPython.display import Audio, display

# Function to play a speech file
def play_speech(file_path):
    # Load the audio file using librosa
    y, sr = librosa.load(file_path)
    # Create an Audio object for playback
    audio = Audio(data=y, rate=sr, autoplay=True)
    # Display the audio player
    display(audio)
This function, play_speech, uses the librosa library to load an audio file from the specified file_path. It then creates an Audio object with the loaded data and sample rate, enabling autoplay. Finally, it uses the display function from IPython to show an audio player in Jupyter Lab, allowing users to listen to the audio.
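If you want to try play_speech on its own, point it at any audio file on disk. The path below reuses the sample file from later in this demo; note that decoding an .m4a file with librosa assumes your system has an ffmpeg-backed decoder available.
# Play an existing audio file (path assumes the demo's audio folder)
play_speech("audio/cappuccino.m4a")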
Next, create a function to generate speech from a text prompt using a text-to-speech (TTS) model.
# Function to generate speech from a text prompt
def speak_prompt(speech_prompt, autoplay=True,
                 speech_file_path="speech.mp3"):
    # Generate speech from the text prompt using TTS
    response = client.audio.speech.create(
        model="tts-1",
        voice="alloy",
        input=speech_prompt
    )
    # Save the synthesized speech to the specified path
    response.stream_to_file(speech_file_path)
    # Sometimes you want to play the speech automatically,
    # sometimes you do not
    if autoplay:
        # Play the synthesized speech
        play_speech(speech_file_path)
This function, speak_prompt, uses a text-to-speech (TTS) model to generate speech from the provided speech_prompt. The generated speech is saved to a specified file path. If autoplay is set to True, the function will automatically play the synthesized speech using the play_speech function.
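Here are a couple of hypothetical calls showing both modes; the sentences and the file name are just examples:
# Synthesize a sentence and play it immediately
speak_prompt("Hello, there. What would you like to order?")
# Or save it to a custom file without playing it
speak_prompt("See you next time!", autoplay=False,
             speech_file_path="goodbye.mp3")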
Qhez hxa enaheoj miblojyu wapal ar ysi tibeujoujid xnoygp.
# Play the initial response based on the situational prompt
initial_response = full_response.split('====')[1].strip()
speak_prompt(initial_response)
Create a function to transcribe speech into text.
# Function to transcribe speech from an audio file
def transcript_speech(speech_filename="my_speech.wav"):
    with open(speech_filename, "rb") as audio_file:
        # Transcribe the audio file using the Whisper model
        transcription = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio_file,
            response_format="json",
            language="en"
        )
    # Return the transcribed text
    return transcription.text
# Transcribe the audio
transcripted_text = transcript_speech("audio/cappuccino.m4a")
# Print the transcribed text
print(transcripted_text)
Combine the initial response and the transcribed text to create a conversation history.
# Function to create a conversation history
def creating_conversation_history(history, added_response):
    history = f"""{history}
====
'{added_response}'
"""
    return history
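Before wiring it into the demo, here's a minimal sketch of what the function produces, using hypothetical strings:
# Hypothetical example: append a spoken reply to an existing history
sample_history = """You're ordering coffee in a cafe.
====
'Hello, what would you like to order?'"""
print(creating_conversation_history(sample_history,
                                    "I'd like a cappuccino, please."))
# The reply is appended after a new "====" separator line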
Now, use the function to create and print the conversation history.
# Create and print the conversation history
history = creating_conversation_history(full_response, transcripted_text)
print(history)
Generate a continuation of the conversation based on the history.
# Function to generate a conversation based on the conversation history
def generate_conversation_from_history(history):
    prompt = """Continue conversation from a person based on this
conversation history and end it with '\n====\n'.
Limit it to max 3 sentences.
This is the history:"""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a creative writer. Very very creative."},
            {"role": "user", "content": f"{prompt}\n{history}"}
        ]
    )
    # Extract and return the generated conversation
    message = response.choices[0].message.content
    return message
This function, generate_conversation_from_history, continues a conversation based on a given history. It constructs a prompt that instructs GPT to continue the conversation and limits the response to a maximum of three sentences. The generated response is then extracted and returned.
# Generate the next turn in the conversation from the history
conversation = generate_conversation_from_history(history)
# Combine the conversation history with the new conversation
combined_history = history + "\n====\n" + conversation
# Print the combined history
print(combined_history)
Generate and display a scenery image based on the combined history.
# Generate a scenery image based on the combined history
dalle_prompt = ("Generate a scenery based on this conversation: "
                + combined_history)
img = generate_situation_image(dalle_prompt)
# Display the generated image
display_image(img)
Finally, generate and play the speech based on the new conversation.
# Generate and play the prompt based on the new conversation
speak_prompt(conversation)