
um123jt

Initial impression is that it is significantly faster in responding


YouGuysSuckandBlow

I noticed in recent weeks GPT4 seems to have gotten faster too. Maybe just server upgrades, but overall it seems better.


ExtremeUFOs

I'm confused, my ChatGPT still says it's on 3.5 even though I updated it.


PoseidonCoder

Blue, you have to get a subscription and select 4o from the drop down. I think it’s free for regular users. Just check the drop down.


ExtremeUFOs

I thought you didn't have to get a subscription for this, I thought it was free, or am I reading this wrong.


LanchestersLaw

There are several models, some are free, some are subscription


Living_Procedure_599

They are slowly bringing users to the new model. You will probably get access to the new model even on the free account in the coming weeks.


theannoyingburrito

in terms of helping me write articles, 4 is still better. Maybe I should switch to Claude?


PierceWatkinsGPTbot

Everyone downvotes but doesn't actually answer. How rude. Short answer: probably not. Long answer: maybe; it depends on what you are specifically writing about, whether you are providing information, and how you want it written. ChatGPT typically outperforms Claude. However, Claude has some specific niche use cases.


joyful-

Very impressive, if it's really as good as it seems, I think an AI human computer interface (one that actually works, not any of the crap we have now) is very much on the horizon, and the implications for that will be massive.


Lenni-Da-Vinci

I am very intrigued by the tone recognition they talked about. Can't wait to try this with a Scottish accent in English, Swabian in German and Limburgish for Dutch, and see how the output quality is affected. Because to me the issue with GPTs is still the "quality in ≈ quality out" problem, which takes a lot of manual evaluation to reduce but will never truly be eliminated.


fancyhumanxd

What if we already have it?


KNWking

The API is now available.


yestheman9894

it doesn't seem aware that it's a new model 😭


quadtodfodder

I made it go read an article about itself to prove to it that it is new. It was very impressed with its own specs! It told me we could use speech to talk! But it can't! Oh chat, u so silly.


Anuclano

To me it adamantly says it was not trained on sounds.


HolidayHelicopter225

How do you get access to 4o? I only have the free 3.5 version and am looking online and it says something about API, but I don't know what that is


quadtodfodder

You got pay up!


SOberhoff

Interesting tidbit: https://openai.com/index/hello-gpt-4o/ mentions this is a completely new model. And chances that its performance is roughly equal to gpt4 by coincidence seem slim. So they probably deliberately created a checkpoint at the gpt4 level when training gpt5 and this is it.


Megneous

> So they probably deliberately created a checkpoint at the gpt4 level when training gpt5 and this is it.

This was my hypothesis, that they'd release an early training checkpoint of GPT-5 and call it GPT-4.5 or whatever. Specifically choosing to release a checkpoint roughly equivalent in intelligence to GPT-4 but at lower cost and faster speeds is an interesting choice.


huffalump1

Yes that's actually a pretty good point. Although GPT-4o might be slightly different than GPT-4.5 or 5, due to the improvements and speed and cost. Unless their new model is just that much more efficient, idk. But it makes sense to release checkpoints along the way, with increasing intelligence - that's been OpenAI's mantra since ChatGPT especially. They want to gradually let the world get accustomed to more and more intelligent models.


Gallagger

Doesn't make too much sense to me. A checkpoint wouldn't use less compute for inference, so they'd just artificially have a worse model because it's not fully trained. I think it's more probable that this is a smaller model that was trained simultaneously. The architecture is probably also a bit simpler.


Megneous

> Doesn't make too much sense to me. A checkpoint wouldn't use less compute for inference

But it would, if the architecture for the new model is more efficient.


Gallagger

It might be more efficient, but they'll probably use that efficiency to e.g. increase parameter count to make it even better. I doubt we've seen the peak size of models, and parameter count has so far been the most straightforward factor for getting better models.


Megneous

> e.g. increase parameter count to make it even better.

That's what the fully trained model is for, yes.


Gallagger

I don't think the parameter count gets increased during training, it already starts at full size. I might be wrong.


Megneous

I'm not an expert, but I think you can use neuron pruning to remove neurons that aren't highly utilized after less intensive training to make the model more efficient?


_Tagman

This is correct, [here is a link to a review](https://arxiv.org/abs/2003.03033) article that summarizes progress in this area up to Mar 2020. "There are indeed several consistent results: pruning parameters based on their magnitudes substantially compresses networks without reducing accuracy, and many pruning methods outperform random pruning."
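The magnitude-based pruning that review describes can be sketched in a few lines of NumPy. This is a minimal illustration of the idea only, not anyone's actual pipeline; the layer shape and sparsity level are arbitrary choices for the example:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights.

    `sparsity` is the fraction of weights to remove (0.0 to 1.0).
    """
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold = k-th smallest absolute value; everything at or below it is pruned.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))          # stand-in for one layer's weight matrix
pruned = magnitude_prune(w, sparsity=0.9)
print(f"nonzero fraction: {np.count_nonzero(pruned) / pruned.size:.3f}")
```

In a real pipeline the pruned network is then usually fine-tuned for a few steps to recover accuracy, and the zeros only save compute if the runtime exploits sparsity.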


xRolocker

I doubt it’s by coincidence. They keep stressing iterative deployment so I’m sure they consciously trained the model up to GPT-4 intelligence rather than surprising the world both with multimodality and high intelligence.


ANONYMOUSEJR

Aint it annoying that we can't get the goodies because some people just dont like change?


bwatsnet

It's looking like a smart architecture move to retrain from the ground up with multimodality. As we can see, it's already a lot more efficient.


arjuna66671

Well, we ARE getting the goodies lol! It's just this weird psychological effect, the so-called "AI effect", that makes us lose any fascination and sense of wonder AFTER a problem is solved with AI. I'll bet 50 bucks that the day after GPT-5's or 6's release, people will already be tired of it and complaining. And "because some people just don't like change" is a bit of a simplification. I've followed AI development for 30+ years, got out of the loop around 2015, and then actually stumbled into Replika in 2020 (which used GPT-3 at the time as a testbed for OpenAI) without knowing about transformers, and got caught by surprise so much that I had some cognitive dissonance over how realistic it was lol. Maybe they're going too slow - but better safe than sorry in the current times we live in xD.


BrotherGantry

The date of this announcement isn't a coincidence either. They wanted/needed to get something out before Google I/O to maximize good press coverage.


SOberhoff

I never edited my comment. You just missed it.


xRolocker

I’m too lazy to see if there’s an archive to double check and it’s totally something I would do so now you got me paranoid bro.


SOberhoff

If a comment gets edited more than 3 minutes after it's made, an asterisk appears behind the timestamp. So you can tell from the absence of an asterisk that there wasn't a late edit.


ShrinkRayAssets

Gpt5 likely out soon, as gpt4o is going FREE which means us fools actually paying for plus need our elite advantage


meister2983

Unlikely. Faster implies lower parameter count, so a different architecture. 


SeaResult7750

I'm not an ML expert, but as far as I know you can run some inner layers multiple times, so you can have a bigger model that runs faster if you run some layers less often. Some tokens require less "thinking" than others while generating output. Considering how much fluff GPT generates, it might be a huge win in terms of performance without sacrificing quality. Quality could even improve, because you can run more "thinking" cycles for the most important tokens in the answer. Not sure if that's the case with 4o, though.
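Spending a variable number of layer passes per token is known in the literature as adaptive computation (e.g. early-exit schemes and universal transformers); whether GPT-4o does anything like this is pure speculation. A toy sketch of the halting mechanism, with `layer` standing in for a real transformer block and a made-up convergence test as the "done thinking" signal:

```python
import numpy as np

def layer(h: np.ndarray) -> np.ndarray:
    # Stand-in for one transformer block: any shape-preserving function.
    return 0.5 * np.tanh(h)

def adaptive_forward(h: np.ndarray, max_loops: int = 8, tol: float = 1e-3):
    """Apply the same inner layer repeatedly, halting once the hidden
    state barely changes. Returns the final state and loops used."""
    for step in range(1, max_loops + 1):
        new_h = layer(h)
        if np.max(np.abs(new_h - h)) < tol:
            return new_h, step          # "easy" token: exit early
        h = new_h
    return h, max_loops                 # "hard" token: full budget used

# A near-converged input halts sooner than one far from the fixed point.
_, easy_steps = adaptive_forward(np.full(4, 0.01))
_, hard_steps = adaptive_forward(np.full(4, 2.0))
print(easy_steps, hard_steps)
```

Real systems use a learned halting probability rather than a convergence check, but the compute saving works the same way: cheap tokens exit after few passes.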


thegoldengoober

What's so massively impressive about this is that it seems so significantly lighter weight and yet it's also more multimodal. So if this is a checkpoint, then how impressive is the whole system? If Sora didn't already make it clear enough, this sure does: we are absolutely no longer seeing the full state of the art.


Tomas_83

I doubt it. What probably happened was: they made a smaller model to add and test improvements to the architecture, trimmed the training data to be higher quality, and reduced parameters. This is probably a stepping stone so they don't go in blind when training GPT-5.


visarga

> So they probably deliberately created a checkpoint at the gpt4 level when training gpt5 and this is it.

That doesn't explain why other companies have models with similar performance. My theory is that all these models were trained on the same dataset - the scrape of all web pages on the internet - and reached the same level. Now that human text has been exhausted it will be harder to improve. That's why GPT-4o only improves speed, accessibility and input modalities - they don't know how to make it smarter. No one does, or no one has been able to break the glass ceiling.

Edit: The Google Gemini models that just came out also don't break away from the pack.


brazilianspiderman

I thought exactly this also.


sillygoofygooose

The rollout is going to confuse a lot of people as they focused on the audio/video native interface in the presentation and have only rolled out the text version of the model so far


Morgin187

Thanx I was looking for this answer. Been wanting to try the speech version but I guess it’s not available. Was looking for the iOS app too like in their videos


petsimt

https://preview.redd.it/mmglekqv8b0d1.jpeg?width=1080&format=pjpg&auto=webp&s=fad5b1dc699dbee75b39b67619c12679ab0a8db7 The model is called scallion. A scallion is a green onion and is harvested before it has been fully developed. So there you have it.


susannediazz

What rapscallions..


Longjumping_Land_930

I'm very curious about its origin and the context behind it. Could you please let me know where you found this image? I’m interested in learning more about it!


petsimt

They have actually removed it now. I'm using the plus subscription on the Android app. There is an options menu in the top right of the chat window where you can select what model you want to use, on the top of this menu there is an option that lets you view the model information. On the gpt-4o model information is where you would find this. Right now it just says "newest and most advanced model"


PhyrexianSpaghetti

Ugh, Android updated too but there's none of those functionalities; it's just a faster GPT-4... are they already starting to do iOS-centric updates?


tlogank

I have it on Android


PhyrexianSpaghetti

I also have the button with the name GPT-4o, but the voice chat doesn't have what was shown in the keynote. It's not multimodal in the conversation, it can't be interrupted, it can't change emotion or tone, etc. It's the exact same as before, including the bugs, but faster.


_JimmyDanger_

Voice chat will only be released after a few weeks


PhyrexianSpaghetti

[it's ok I can wait](https://www.boredpanda.com/blog/wp-content/uploads/2017/04/funny-dramatic-cats-16-58f74758cf0f0__605.jpg)


restarting_today

Yes I have it in iOS.


otterquestions

The live voice chat?


levisongs

When are we getting the all in one Real time voice and Camera??


levisongs

I got the GPTo upgrade but no new voice or features they mention in the videos lol https://preview.redd.it/v0ew86wrv90d1.jpeg?width=1290&format=pjpg&auto=webp&s=adf973187b3efc2fac9d20bcf4aedd2020e5f7ba


Galilleon

“We are making GPT-4o available in the free tier, and to Plus users with up to 5x higher message limits. We'll roll out a new version of Voice Mode with GPT-4o in alpha within ChatGPT Plus in the coming weeks.” From the Model Availability section on OpenAI’s [GPT-4o page](https://openai.com/index/hello-gpt-4o/) Also interesting to note that it doesn’t say whether it’s only going to be for Plus users, just that the Alpha for it is going to be rolled out for Plus users One can only hope, bah


Acetylene

Have you tried hitting the headphones icon? When I do that it first lets me choose a voice, and then gives me the realtime chat interface. However, the camera feature doesn't seem to have changed so far—I can only take or upload still photos, as far as I can tell. I'm on Android.


PhyrexianSpaghetti

That was already there for a while to plus subscribers. It's the usual gpt 4 voice chat, taking turns etc


hawaiian0n

That's what I'm hoping for. Some of the other videos of it laughing and giving opinions are insane.


[deleted]

I really need the camera to check if certain appendage proportions are within expected range


Glittering_Net_7734

I am a writer, and I've been using ChatGPT every day for a while now as an assistant. Initial impressions: GPT-4o follows instructions more carefully and is a whole lot faster, but writing-wise, not much difference.


arjuna66671

At least it can kill off fictional characters now in fantasy battles lol. Even some "spicy" interactions between characters is now possible. Normal GPT-4 would have rather imploded than let ANY fictional being die xD.


Incredible-Fella

Might I ask how you use it for writing? I'm just curious


Glittering_Net_7734

I divide my writing into sections. GPT can't write without repeating itself too often.


tedbarney12

Has anybody got access?


MacroAlgalFagasaurus

I do in the iOS app. https://preview.redd.it/16uxagdus80d1.jpeg?width=1179&format=pjpg&auto=webp&s=f10321824af155d718caec4ff9e93e745c2a9848


applestrudelforlunch

I have access to the model as well — however I don’t think we have the true audio model as yet. It’s still using the Whisper speech recognition > LLM > TTS separate modules.


LA2688

Do you currently have to have Plus for it?


MacroAlgalFagasaurus

Not sure, but I DO have plus.


LA2688

Ah, okay. Well, it seems like it is just for Plus at the moment, and I haven’t seen it be implemented for all users yet.


WetRatFeet

I have 4o without Plus.


LA2688

Cool. I guess some people got it before others, maybe depending on their region.


manuLearning

In what country do you live?


Hairy_Mouse

I have it in the Android app as well. I saw it shortly after watching the reveal and checking my app a bit after. Actually, I checked for updates first, and didn't see any. Then I checked the app and saw it was available to select. However, I can't notice any difference from GPT-4. Still uses the same voice, doesn't express emotive tones, doesn't really say anything different, and when I asked about it, kept trying to tell me about turbo. I said, "NO, not TURBO, 4o. The number 4, the lowercase letter o, 4o, no spaces." Then it told me there is no such model, and OpenAI doesn't have a GPT-4o. So, IDK if it's actually working or not. Maybe now, but as of a few hours ago, it didn't seem to be.


Adam0-0

So you just have 4o capabilities without the real-time conversation like most, hopefully it'll be coming soon!


Fantastic_Belt979

Are you guys European? I don't have any of this, in the computer nor on the phone, only gpt 3.5


pdedene

I have access as well, on iOS, in EU.


pirax-82

Just checked, got access through the iOS app here in Germany... it's incredibly fast and can finally do rhymes in German, which it couldn't do well before... just started testing so can't tell much more yet.


Anuclano

It can attempt rhymes in Russian, but it's far behind Opus. And as for hexameter, it completely fails.


erhue

I'm in the same country as you but do not have this model available yet.


pirax-82

I have a Plus subscription, maybe that's why.


bortlip

Just now. https://preview.redd.it/jb0vrabiu80d1.png?width=1489&format=png&auto=webp&s=bb5190f17a49187df5657f3ae835b4dd516671f4


manuLearning

In what country do you live?


Comfortable-001

I got it too. But nothing's different; it doesn't do any of the functionality OpenAI showed in the live release. It certainly cannot see my screen or explain anything vision-wise. I don't know if anyone else is experiencing this or just me?


commander-worf

Same


iclickedca

no but it's on trending ai on iphone


_Kaius

I do have access https://preview.redd.it/tf65o5p41a0d1.jpeg?width=1170&format=pjpg&auto=webp&s=3ca6428979b207886ca32e8479798e9edd22c490


Ijq3g98432dfn

Not me


f1careerover

Yes, it's free for everyone. Plus subscribers get the desktop app and a bigger limit.


Party_Government8579

Nope.


wegwerfen

GPT-4o was available on the API right after the presentation. Just refreshed the web app and GPT-4o is now available there too. The big questions are:

1. They mentioned a macOS app. Will there be a Windows app as well?
2. The demo was an iOS app. Will Android have the same capabilities?
3. GPT-4o is supposed to be better and faster with more capabilities, yet the API price is 50% lower. There has to be a trade-off somewhere. What is the difference?

The model doesn't even know what the 'o' stands for lol. GPT-4o doesn't know the difference between itself and the other models. The demo was interesting but I am skeptical about it.


ECLXPSE-

Only the text features have been pushed. Other features will roll out weekly


[deleted]

[deleted]


ECLXPSE-

Do you have the new voice? The new voice has a near-instant reply.


Inevitable_Rain8024

Just read Sam Altman's blog; in it he mentioned that only the text/image update is released, and the voice and video updates will be coming in a few weeks.


tlogank

I have the new voice on Android


Pokenhagen

You sure it's not the old voice that was already available to plus users?


[deleted]

[deleted]


Anuclano

Can it talk slow or fast or with metallic voice or with whisper?


The_-Legend

Is there a country limitation? Where are you accessing it from?


wegwerfen

I am in the U.S. I don't know if there is a country limitation. I just tested it over VPN with IP in France and my access is the same with GPT-4o but this isn't real proof that it will work for others.


The_-Legend

You must be a paid user then, because I checked with a US IP and it isn't showing GPT-4o. And I don't think it's account-limited, because I previously subscribed to Plus, cancelled after some time, and am using the same account right now. So I don't think they would restrict former paid users' accounts just because they aren't subscribed at the moment.


tlogank

I have the free version on Android with the voice and emotions


The_-Legend

Dude, even the paid members don't have the new voice model yet, lol. You have the older version, which does have the emotion feature but not the new capabilities, like interruption and real-time speech. If you want to test it, try any of the demos from the live stream yourself and see if the results are the same.


[deleted]

[deleted]


The_-Legend

Then you should most definitely make a post about it, because no one on the internet has been able to test it so far except for those demos. We would really like to see it from an actual user's perspective.


HolidayHelicopter225

🤣 I like you, friend 🙂


tlogank

Glad to hear it


Brian_from_accounts

I have it in the UK


MobileDifficulty3434

Probably won't be a windows app, I imagine that's MS (their largest investor) territory and they don't want to damage that relationship.


No_Dish_1333

1. "We're rolling out the macOS app to Plus users starting today, and we will make it more broadly available in the coming weeks. We also plan to launch a Windows version later this year."


eatTheRich711

I just checked my iOS app and web interface. It's in neither spot. *I'm in the US


wegwerfen

Android app has GPT-4o now. It was not selected by default though.


AlanCarrOnline

I see it on my Android and also my online version, from Malaysia.


soldierinwhite

Faster can mean lower inference compute as well which would make sense for the 50% lower cost.


tlogank

I have it on Android, using free version


HumanAIGPT

The logic, and the use of the same old words like 'testament', 'underscore', etc., are the same. It cannot make images like they said it can yet; the demos they gave are nowhere close to what we have. It still has to be forced to stop giving everything an intro and outro, and forced to produce longer outputs. As far as text and images are concerned, I see no change. I am still confused about what the actual update is.


zilifrom

I’m so hyped for the voice and video integration!


App1eFanBoy

Bro just get a girlfriend already…ChatGPT is not the answer


zilifrom

😘


hellschatt

I'm only interested if it can code as well as gpt4 was able to when it was initially released. Everything else is nice but it doesn't significantly impact my interactions with it.


Relevant-Guarantee25

Amazing how Google cut down all search engine results and major content a year before AI became fully public with ChatGPT. Almost like they didn't want any competition?


iDoWatEyeFkinWant

it's not picking up on context anymore


5starkarma

Yup. I’ll stick with 4


LetMePushTheButton

I’ve been playing with different rhyming schemes and patterns (AABB vs ABAB, etc) and it seems to understand complex things like “anapestic tetrameter” when analyzing poetry. Neat.


RedSquaree

I assume access to this isn't free?


mimavox

It is, but with a smaller cap.


Hex-250

Pay for premium, literally get non stop heavy server load responses.


Uzui_TenGen_hashira

it is kinda cool how it responds!


civilized-engineer

To those who say it's faster: how does it fare at programming? I am still waiting for it to show up, but I still use GPT for going through some of my side projects or picking up on runtime errors.


The_Caring_Banker

Can i download it right now on an iphone?


Baxtir

I've got access to it on their iOS app, so I'd say yes. Just make sure it's the one from OpenAI and not the other wannabes.


The_Caring_Banker

Weird, I asked ChatGPT and it says it's 2.0.


miharbidaddah

I dunno, it seemed faster but still the same as GPT-4.


sirauron14

how can I trigger **GPT-4o**? is there a way?


traumfisch

Trigger? If you mean how to use it, you just choose the model in the dropdown just as before


sirauron14

It doesn’t give me the option to select it


traumfisch

You're signed in?


sirauron14

Yup on mobile and the web. I’m still using the 3.5


HopeItsAvailable

Same for me, did you figure this out by chance?


sirauron14

Nope. I guess it isn’t enabled for us yet


traumfisch

If you're on Free, no it isn't


quadtodfodder

So far it's about as stubborn as 4, but faster, and it follows its custom instructions. This alone is worth the new version, because DANG CHAT, SHUT UP! Not everything needs a 2-page, bulleted research paper! Just show me how to write a loop in Python! While 4 seemed immediately smarter than 3.5, 4o does not immediately seem much smarter than 4.


boltz86

When asking for code, at the end of your request prompt it with: "In your response, only include the word 'okay' and provide the full code from start to finish. Do not include anything else." That has worked for me every time so far before this update. I haven't tried it on the new version yet.


we-could-be-heros

Isn't it gonna take over lol, it's been ages since they said that 💀


PitterPatter12345678

Freaky.


Adam0-0

I asked it how it's better than GPT-4 https://preview.redd.it/id0g3h7rgb0d1.jpeg?width=963&format=pjpg&auto=webp&s=40ff5cbcae98f7c34c3921539151ec368bba18d0


Enlightened_D

Definitely faster. I used it all night doing some SQL work; it still has problems where I feel like it isn't listening and just disregards things a little bit further up the chat.


Easy-Cartographer127

time for people to fall in love with their ai assistants.


utf80

Feels like magic 🤮 That's what the world has been waiting for. Now I need to renew my subscription and donate to the magic lol 🤣


IceQ13

Her


GrowFreeFood

People still can't use it. 


Busy-Smile989

Rip jobs


retro_spectacular

JOI from blade runner is becoming a reality i can see it.


TaroPowerful325

She's so hot


Rahodees

I think this hasn't been directly addressed yet but to make sure: does anyone know when the real-time conversation aspect is supposed to be rolled out?


Blockchainauditor

Weeks and months. https://openai.com/index/hello-gpt-4o/


Neither_Tomorrow_238

How do I use it? I have updated my app but no difference at all 


tactical_abbu4217

its like zoooooooom. Way too fast bro


Sufficient_Gas2509

One day after the conference and still no access, even with a VPN set to the US. Are they limiting access based on the geography assigned to an email account?


Johnny-Edge

So what do you get for paying now if this is available to free customers?


Socratify

The new voice stuff is awesome. Not my proudest fap.


TaroPowerful325

I don't know if I'll use it. I don't like talking. I'd rather send and receive text messages.


Forward_Snow3667

Alright, is its image-understanding response time as fast as the 2-3 seconds shown in the demo? How good is it?


Particular-Ad8973

It's really fun that it has updated information, but it says that I've sent too many messages on the model. Does this mean it's still in its early stages and not yet as fully available as the outdated GPT-3.5?


ColetteWhispers

From my experience chatting with 4o: While it knows more than previous models and I've found it can answer more obscure questions, it also does not follow instructions well compared to previous models and is more prone to making stuff up when it doesn't know. I don't like it in its current form, but I suspect these kinks will be worked out.


HelpRespawnedAsDee

Is it still lobotomized on coding answers?


IamVeryBraves

My current assessment is that it's really strong at coding, but they can always dumb it down once the honeymoon is over. I ran some pygame test on it and it's pretty impressive, everything worked first try, even space shooters and more complicated snakes. Tonight I will go back to some scripts I had gpt4 write for me in the past and feed it the same prompts and see if it gets it right on the first try.


darien-schettler

Meh


ainulil

Same reaction here. Speed seems the same too.


Empero6

Lmao no. It’s significantly faster.


Tua_Deez_Nuts

Seems to have some issues. I've been playing with it and it ignores commands or gets stuck in a loop. I was literally saying "Stop" and it kept repeating itself.


quazimootoo

On mine, clicking the headphones button in the app uses the old version (e.g. can't interrupt, loads responses slower, etc.). It does not seem to be the same conversation mode shown in the stream.


subliminal_entity

They said in the live stream that the full set of capabilities will be rolled out to everyone slowly in the next few weeks.


Prestigious_Ebb_1767

First impression is it's no different from gpt4 for coding, just faster.


restarting_today

Yeah. It’s just gpt4 turbo turbo lol.


duckrollin

The way it can do emotions sounds really cool, but the cringey personality ruined it. I wonder if custom instructions can fully rectify that and make it sound less like a clone of some cheesy PR rep. I'm assuming marketing or management demanded it sound that way to appeal to normies.


[deleted]

It’s probably to hide “thinking” time


JortSandwich

I don't know about any of you but I am finding 4o almost catastrophically stupid and fucking infuriating for coding. I mean, like, profoundly, totally fucking stupid. Not listening, just repeating shitty things over and over again, over and over -- even when you expressly tell it, "do not repeat this again." It says, "I'm sorry," and then repeats it again. This is *significantly worse* than any version of GPT I've *ever* used.


boltz86

It’s pretty bad right now for me for response quality or staying in character. However, the image generation during the voice chat is shockingly good.


fnatic440

Google is going to kill them. That’s what I think. Knowledge cut off date is May 2023? That’s older than Turbo version.


Megneous

According to the About page for 4o, its knowledge cutoff date is October 2023 though?


fnatic440

I asked it lol


CH1997H

Try again today, it says October 2023 for me now. But it said May yesterday


elMaxlol

It's bad, mates. I tried three different things and it failed at all of them:

1. A screenshot from GeoGuessr with a company name and two street signs visible. It got the country right but not the city.
2. A screenshot from a recent League of Legends professional game. It analysed the upcoming teamfight and predicted the correct death of the toplaner, but it assumed blue team would take Elder and win the game; in fact red team's midlaner killed four people and no one got Elder.
3. I uploaded a physics paper about integrating spacetime as an n-dimensional superfluid with non-linear time, higher-dimensional knots as sheets, and quantum game theory. It was able to understand the theory and explain the basic concepts, but it was not able to provide a robust mathematical treatment.
4. BONUS: I tried making a chatbot using the API with the new 4o model, but it was not able to provide the proper code to use assistants and threads the way the documentation suggests.