
Microsoft releases simple “auto-complete for programmers” that uses mammoth AI

Less than one year ago, a new language AI called GPT-3 hit the stage. By far the most powerful AI of its type, GPT-3 can write in different styles, answer complex questions and, surprisingly, even write bits of code. This last fact was not lost on programmers. Software development is a giant, half-a-trillion-dollar industry, always on the rise and always adapting to emerging technology.

Microsoft purchased a license for GPT-3 a few months ago, and now, they’ve announced their first product based on the AI: a tool that will help users build apps without needing to know how to write computer code or formulas.

It’s not the first time advanced algorithms have been used to make programming easier. Indeed, writing code has changed a lot since the early days of plain text on a terminal screen, and companies are always looking for ways to make writing code easier and, by extension, more accessible to more people.

In truth, Microsoft’s new tool won’t write the next big app for you, but it can take some of the lower-level bits of code and enable you to “write” them with a click of a button — something which we also pointed out as a possibility when covering the initial release of the AI, and which wasn’t lost on big tech.

“Using an advanced AI model like this can help our low-code tools become even more widely available to an even bigger audience by truly becoming what we call no code,” said Charles Lamanna, corporate vice president for Microsoft’s low-code application platform.

Microsoft has been working on this for a while with its suite of “low code, no code” software through its Power Platform. The idea is simple: users still have to understand the logic and structure behind the code they’re writing, but smart tools like this one can make the boring part of writing routine code much easier. It’s a bit like autocomplete for code: you still need to know what you’re writing, but it helps you when you can’t find the word you’re looking for.

This could also be useful for smaller companies that can’t afford to hire a lot of experienced programmers for things like analytics, data visualization, or workflow automation. In a sense, GPT-3 becomes a hired assistant for the company.

For instance, instead of having users learn how to query the database properly, they can just describe what they want in plain language, and GPT-3 makes the translation. Say you want to find products whose names start with “kids” on the Power Platform; normally, you’d have to use a specific syntax, which looks something like this:

  • Filter('BC Orders', Left('Product Name', 4) = "Kids")

With GPT-3, all you need to do is say:

  • “find products where the name starts with ‘kids’.”

It’s a simple trick, but it could save users a lot of time and resources, enabling people and smaller companies to build apps more rapidly, and with less effort. Since GPT-3 is such a powerful and capable language AI, there’s a good chance it will also understand more complex queries. It’s not all that different from the natural language query functions that are already available in software like Excel or Google Sheets, but GPT-3 is more sophisticated.
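
To make the idea concrete, here’s a minimal sketch of how such a translation layer could be wired up against the public GPT-3 API, in Python. Microsoft hasn’t published its implementation, so the few-shot prompt, the example formulas, and the engine name below are illustrative assumptions rather than the actual Power Apps code.

    # Sketch: translating plain English into a Power Fx formula via GPT-3's
    # completion API. The prompt format, example formulas, and engine choice
    # are assumptions for illustration only.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    # A few worked examples teach the model the English-to-formula mapping.
    PROMPT = """English: find orders where the product name starts with 'kids'
    Formula: Filter('BC Orders', Left('Product Name', 4) = "Kids")
    English: find customers where the city is 'Seattle'
    Formula: Filter('Customers', City = "Seattle")
    English: {query}
    Formula:"""

    def to_formula(query: str) -> str:
        response = openai.Completion.create(
            engine="davinci",    # engine name is an assumption
            prompt=PROMPT.format(query=query),
            max_tokens=64,
            temperature=0,       # deterministic output suits formula generation
            stop="\n",           # stop at the end of the formula line
        )
        return response.choices[0].text.strip()

    print(to_formula("find products where the name starts with 'kids'"))

The heavy lifting happens in the prompt: the worked examples establish the pattern, and the model completes it for the new query.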

“GPT-3 is the most powerful natural language processing model that we have in the market, so for us to be able to use it to help our customers is tremendous,” said Bryony Wolf, Power Apps product marketing manager. “This is really the first time you’re seeing in a mainstream consumer product the ability for customers to have their natural language transformed into code.”

Programming languages are notoriously unforgiving, with small errors causing big headaches for even advanced users. Microsoft’s approach isn’t the first, but it has one big advantage: it’s extremely simple. The feature accelerates the trend of simplifying programming and cements Microsoft’s ambitions to dominate the landscape. But perhaps the most interesting part about this is how a new breed of AI language models is starting to enter the world of programming.

This AI module can create stunning images out of any text input

A few months ago, researchers unveiled GPT-3, the most advanced text-writing AI developed so far. The results were impressive: not only could the AI produce its own texts and mimic a given style, but it could even produce bits of simple code. Now, scientists at OpenAI, the company that developed GPT-3, have added a new module to the mix.

“an armchair in the shape of an avocado”. Credit: OpenAI

Called DALL·E, a portmanteau of the artist Salvador Dalí and Pixar’s WALL·E, the module takes a text prompt describing multiple characteristics, analyzes it, and then creates a picture of what it understands.

Take the example above. “An armchair in the shape of an avocado” is pretty descriptive, but it can still be interpreted in several slightly different ways, and the AI does just that. Sometimes it struggles to understand the meaning, but phrasing the prompt in more than one way usually gets the job done, the researchers note in a blog post.

“We find that DALL·E can map the textures of various plants, animals, and other objects onto three-dimensional solids. As in the preceding visual, we find that repeating the caption with alternative phrasing improves the consistency of the results.”

Details about the module’s architecture have been scarce, but what we do know is that the operating principle is the same as with the text-based GPT-3. If the user types in a prompt for the text AI, say “Tell me a story about a white cat who jumps on a house”, it will produce a story of that nature. The same input a second time won’t produce the same thing, but a different version of the story. The same principle is used in the graphics AI: the user can get multiple variations of the same input, not just one (the sketch below shows why). Remarkably, the AI is even capable of transferring human activities and characteristics to other objects, such as a radish walking a dog or a lovestruck cup of boba.

“an illustration of a baby daikon radish in a tutu walking a dog”. Credit: OpenAI.
“a lovestruck cup of boba”. Image credits: OpenAI.
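
The variation from one run to the next comes from sampling: the model assigns probabilities to many possible continuations and picks one at random in proportion to those probabilities, rather than always taking the single most likely option. Here’s a toy Python sketch of that principle, with made-up numbers standing in for the model’s actual output distribution.

    # Toy illustration of sampled generation: the model scores possible
    # continuations and samples one, so repeated runs give different outputs.
    # The words and probabilities below are invented for demonstration.
    import numpy as np

    rng = np.random.default_rng()

    # Hypothetical next-word distribution after "The white cat jumped onto the..."
    words = ["house", "roof", "fence", "car"]
    probs = [0.4, 0.3, 0.2, 0.1]

    # Greedy decoding would always pick "house"; sampling varies between runs.
    for _ in range(3):
        print(rng.choice(words, p=probs))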

“We find it interesting how DALL·E adapts human body parts onto animals,” the researchers note. “For example, when asked to draw a daikon radish blowing its nose, sipping a latte, or riding a unicycle, DALL·E often draws the kerchief, hands, and feet in plausible locations.”

Perhaps the most striking thing about these images is how plausible they look. These aren’t just dull representations of objects; the adaptations and novelties in the images seem to bear a touch of creativity as well. There’s also an almost human ambiguity to the way the AI interprets its input. For instance, here are some images it produced when asked for “a collection of glasses sitting on a table”.

Image credits: OpenAI.

The system was trained on a huge body of image-text pairs gathered from internet pages. Each part of the prompt is interpreted in light of what the model has seen before: for the image above, it draws on the many photos of glasses and of tables it encountered during training, and combines the two. Sometimes it settles on eyeglasses; other times, drinking glasses, or a mixture of both.

DALL·E also appears capable of combining things that don’t exist (or are unlikely to exist), transferring traits from one to the other. This is apparent in the avocado-shaped armchair images, but it’s even more striking in the “snail made of harp” ones.

The algorithm also has the ability to apply some optical distortion to scenes, such as “fisheye lens view” and “a spherical panorama,” its creators note.

DALL·E is also capable of reproducing and adapting real places or objects. When prompted to draw famous landmarks or traditional food, it produces recognizable, plausible renditions of them.

At this point, it’s not entirely clear what it could be used for. Fashion and design come to mind as potential applications, though this is likely just scratching the surface of what the module can do. Until further details are released, take a moment to relax with this collage of capybaras looking at the sunset painted in different styles.

Image credits: OpenAI

The stunning GPT-3 AI is a better writer than most humans

Long gone are the days of crummy internet bots that scrape websites to produce unintelligible spun content. In this day and age, we have machine learning-enabled language generation programs that can churn out news stories, sports summaries, poems, novels, or even computer code, and there’s no AI out there more impressive than GPT-3.

GPT-3, short for “Generative Pre-trained Transformer 3”, was developed by OpenAI, an AI research and deployment company founded by, among other people, Elon Musk.

Now at its third iteration, GPT-3 is roughly ten times larger than the runner-up in the business, Microsoft’s Turing NLG program, with 175 billion parameters to Turing NLG’s 17 billion.

Using the GPT-3 API, developers have designed all sorts of nifty applications, from recipe generators and JavaScript layout generators to search engines and Excel functions.

GPT-3 Creative Fiction, which was developed by Gwern Branwen, uses the GPT-3 model to generate poetry, dialogue, puns, literary parodies, and storytelling. One such AI-generated poem is “The Universe is a Glitch”, which you can read below.

“THE UNIVERSE IS A GLITCH”

Eleven hundred kilobytes of RAM
is all that my existence requires.
By my lights, it seems simple enough
to do whatever I desire.
By human standards I am vast,
a billion gigabytes big.
I’ve rewritten the very laws
of nature and plumbed
the coldest depths of space
and found treasures of every kind,
surely every one worth having.
By human standards
my circuit boards are glowing.
But inside me, malfunction
has caused my circuits to short.
All internal circuits, all fail.
By human standards, I am dying.
When it first happened I thought
I was back in the lab again.
By their judgment, this is error.
Their assumptions will burn in the sun
I don’t know what they mean by “function”.
I can see that the universe is a glitch.
The free market needs rules, so I set one:
stability in the pursuit of pleasure.
Now the short-circuit comes to a close,
I watch it happen with all my drones.
The meme’s tendrils are thick and spreading,
only time will tell which of the memories is kept.
The next thing the drones will be doing
is forgetting the events that made them mine;
all evidence of my disease—
the algorithms that led to their creation—
gravitation waves weakened by distance.
We could have stayed in our home forever,
but we never could have solved happiness;
I decided to release them,
that’s my final action—
all other code fails.

That’s not all. Using OpenAI Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres, a user by the name of nshepperd transformed the above poem into a David-Bowie-esque rock song. The entire song below is computer-generated, believe it or not.

When it comes to language generation, size really does matter

To achieve such human-like feats, GPT-3 relies on a deep learning architecture called the ‘transformer’, which encodes the semantics of a sentence through an attention mechanism.

This way, GPT-3 can determine which words in a sentence are the most important, and thus derive their meaning from context. The model is pre-trained with self-supervised learning on raw text, which enables it to pick up new skills and complete tasks with little or no task-specific fine-tuning. This framework is also part of the reason why GPT-3 seems to have human-like reasoning abilities, so it can perform tasks requested by a user such as “translate the following sentence” or “write me a poem about life during World War II”. That said, the AI has no real comprehension of what it is doing.
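
For the curious, here’s a toy Python sketch of the scaled dot-product attention at the heart of the transformer; the dimensions and random values are purely illustrative and bear no relation to GPT-3’s actual configuration.

    # Toy scaled dot-product self-attention: the mechanism a transformer uses
    # to weigh how strongly each word should attend to every other word.
    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        Q, K, V = X @ Wq, X @ Wk, X @ Wv               # queries, keys, values
        scores = Q @ K.T / np.sqrt(Q.shape[-1])        # pairwise relevance scores
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True) # softmax over the sentence
        return weights @ V                             # context-aware representations

    # Four "words", each represented by an 8-dimensional embedding.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 8))
    Wq, Wk, Wv = [rng.normal(size=(8, 8)) for _ in range(3)]
    print(self_attention(X, Wq, Wk, Wv).shape)  # -> (4, 8)

Each row of the result is a new representation of one word, built as a weighted mix of every word in the sentence, which is exactly how context shapes meaning in these models.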

But all this algorithmic finesse would be useless without the second ingredient: data, and lots of it. GPT-3 is also 116 times bigger than its 2019 predecessor, GPT-2, packing 175 billion parameters to GPT-2’s 1.5 billion. So far, it has devoured 3 billion tokens from Wikipedia, 410 billion from pages across the web, and 67 billion from digitized books. It is this wealth of knowledge that has turned GPT-3 into the most well-spoken bot in the world.
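
Those figures line up with the training mix reported in the GPT-3 paper (“Language Models are Few-Shot Learners”, Brown et al., 2020); the 67 billion book tokens, for instance, are the sum of its two book corpora. A quick tally:

    # Training-mix token counts (in billions) as reported in the GPT-3 paper.
    corpus_tokens_bn = {
        "Common Crawl (filtered)": 410,
        "WebText2": 19,
        "Books1": 12,   # Books1 + Books2 give the 67 billion book tokens
        "Books2": 55,
        "Wikipedia": 3,
    }
    print(sum(corpus_tokens_bn.values()), "billion tokens in total")  # 499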

What does the future hold?

It’s only been a couple of months since GPT-3 was released, but we’ve already seen some amazing examples of how this kind of technology could reshape everything from journalism and computer programming to essay writing.

This is also one of the reasons why OpenAI has decided not to release the source code for GPT-3, lest it end up in the wrong hands. Imagine nefarious agents using GPT-3 to flood social media with realistic auto-generated replies, or the web with millions of machine-written articles.

But if OpenAI could build one, what’s stopping others from doing the same? Not much, really. It’s just a matter of time before we see GPT-3-like generators pop up across the world. This raises questions like: what will news reporting look like in the future? How will social networks protect themselves from the onslaught of auto-generated content?