Turns out #GPT3 can do vision too 😉 Built an ingredient parser: take a pic of any nutrition label (Google OCR extracts the text), and GPT-3 will identify ingredients, find an emoji, determine if each is unhealthy, and give a definition 🤯
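A minimal sketch of the two-stage pipeline described above, with both external services stubbed out. All function names and the completion format are hypothetical — the original uses Google's OCR and the GPT-3 API, neither of which is shown in the thread:

```python
def ocr_label(image_bytes):
    """Stub for stage 1: OCR (Google's text extraction in the original).
    A real implementation would call an OCR service here."""
    raise NotImplementedError("plug in an OCR service")

def parse_completion(completion):
    """Parse a hypothetical GPT-3 completion for stage 2, where each
    line looks like: 'name | emoji | healthy-or-unhealthy | definition'."""
    results = []
    for line in completion.strip().splitlines():
        name, emoji, verdict, definition = (p.strip() for p in line.split("|"))
        results.append({
            "name": name,
            "emoji": emoji,
            "unhealthy": verdict.lower() == "unhealthy",
            "definition": definition,
        })
    return results
```

The point of the split is that GPT-3 never sees pixels — it only ever sees the OCR'd text, which is why a single worked example is enough to prime it.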

Jul 19, 2020 · 10:04 PM UTC

to be clear, I'm not pulling from any database. And I "trained" GPT-3 using just *1* simple example
btw @sh_reya is right: much of the work here was just figuring out the right prompt (this took a while 😅)
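The thread never shares the actual prompt, but the "trained with just 1 simple example" approach is one-shot prompting: a single worked example followed by the new label, left open for the model to complete. Here's a hedged reconstruction — the example label, the output format, and the function name are all invented for illustration:

```python
# Hypothetical one-shot prompt: one worked example primes GPT-3 to
# continue the same pattern for the new label text.
EXAMPLE = (
    "Label: SUGAR, WATER, CITRIC ACID\n"
    "Ingredients:\n"
    "Sugar | 🍬 | unhealthy | a sweet crystalline carbohydrate\n"
    "Water | 💧 | healthy | a clear, drinkable liquid\n"
    "Citric acid | 🍋 | healthy | a mild acid found in citrus fruit\n"
)

def build_prompt(label_text):
    """Prepend the single example, then leave the new label's
    'Ingredients:' section open for the model to fill in."""
    return f"{EXAMPLE}\nLabel: {label_text}\nIngredients:\n"
```

Getting the example's format right is where the "figuring out the right prompt" time goes — the model mirrors whatever structure the one example establishes.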
Replying to @sh_reya
GPT-3 is a great example of the “garbage-in-garbage-out” principle. If you prime poorly, you get shitty results. But since the models are probably trained on basically every piece of data on the Internet, chances are if you prime well, you’ll get intelligent outputs. (8/11)
Replying to @lawderpaul @gdb
Hmmm, this isn't really a GPT-3 use case when you can just hardcode the list of ingredients of interest.
agreed, not to mention the info would change slightly every time 😂 just wanted to test its ability to parse text (& threw in a couple other qs while I was at it)
Replying to @lawderpaul @gdb
How do you feed the image to it? Send the bitmap as 1's and 0's with spaces?
Replying to @lawderpaul
Too slow no? Is there a skip "interpreting" button? I want AI to do what I don't wanna wait for.
Replying to @lawderpaul
I thought GPT3 was doing images too, but then I read your tweet carefully: Google does the text extraction, and then GPT3 processes the ingredients and classifies them as healthy or unhealthy
Replying to @lawderpaul
How "truthful" are these results though? For all we know, they might be spreading misinformation about nutrition if the model was trained on text from homeopathy websites rather than primary sources