Google Bard – A Primer

Today we’re starting to open access to Bard, an early experiment that lets you collaborate with generative AI. This follows our announcements from last week as we continue to bring helpful AI experiences to people, businesses and communities.

Bard is Google AI at your service

You can use Bard to boost your productivity, accelerate your ideas and fuel your curiosity. You might ask Bard to give you tips to reach your goal of reading more books this year, explain quantum physics in simple terms or spark your creativity by outlining a blog post.

A machine for creativity

It’s important to be aware of LLMs’ challenges, which we discuss below, but they also bring incredible benefits, like jumpstarting human productivity, creativity and curiosity. And so, when using Bard, you’ll often get the choice of a few different drafts of its response so you can pick the best starting point for you.

You can continue to collaborate with Bard from there, asking follow-up questions. And if you want to see an alternative, you can always have Bard try again.
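Bard’s internals aren’t public, but the “multiple drafts” behavior maps naturally onto sampled generation: asking the model for several independent completions of the same prompt. Here’s a minimal sketch of that idea using the open-source Hugging Face transformers library with GPT-2 as a stand-in; this is an illustration, not Bard’s actual model or serving stack.

```python
# Sketch: producing several "drafts" by sampling a language model
# multiple times. GPT-2 here is a stand-in; Bard's actual model
# and API are not public.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Outline a blog post about reading more books this year:"
inputs = tokenizer(prompt, return_tensors="pt")

# do_sample=True enables randomized decoding; num_return_sequences
# asks for three independent samples, i.e. three candidate "drafts".
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.9,
    max_new_tokens=60,
    num_return_sequences=3,
    pad_token_id=tokenizer.eos_token_id,
)

for i, draft in enumerate(outputs, 1):
    print(f"--- Draft {i} ---")
    print(tokenizer.decode(draft, skip_special_tokens=True))
```

Because each draft is an independent sample, rerunning the generation (the equivalent of having Bard “try again”) simply draws fresh candidates from the same distribution.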

Bard is a direct interface to an LLM, and we think of it as a complementary experience to Google Search. Bard is designed so that you can easily visit Search to check its responses or explore sources across the web.

Click “Google it” to see suggestions for queries, and Search will open in a new tab so you can find relevant results and dig deeper. We’ll also be thoughtfully integrating LLMs into Search in a deeper way — more to come.

Large Language Models

Bard is powered by a research large language model (LLM), specifically a lightweight and optimized version of LaMDA, and will be updated with newer, more capable models over time. 

Bard is grounded in Google’s understanding of quality information. You can think of an LLM as a prediction engine: when given a prompt, it generates a response by selecting, one word at a time, from words that are likely to come next.

Picking the most probable choice every time wouldn’t lead to very creative responses, so there’s some flexibility factored in. 
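That flexibility is commonly implemented as temperature sampling: instead of always taking the single most probable next word, the model draws from the probability distribution over candidates, with a temperature knob controlling how adventurous the draw is. Here’s a minimal NumPy sketch of the idea; the vocabulary and scores are made-up toy values, not Bard’s actual decoding code.

```python
import numpy as np

rng = np.random.default_rng()

# Toy next-word scores (logits) for a tiny vocabulary. A real LLM
# produces one such score per token in a vocabulary of tens of
# thousands; these numbers are invented for illustration.
vocab = ["the", "a", "quantum", "books", "creativity"]
logits = np.array([2.5, 2.3, 0.8, 0.5, 0.1])

def sample_next_word(logits, temperature=1.0):
    # Temperature < 1 sharpens the distribution (more predictable);
    # temperature > 1 flattens it (more varied and "creative").
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs)

# Greedy decoding: always the top-scoring word, the same every time.
print("greedy:", vocab[int(np.argmax(logits))])

# Sampled decoding varies from run to run, which is where Bard's
# different drafts and "try again" variations come from.
for _ in range(3):
    print("sampled:", vocab[sample_next_word(logits, temperature=0.9)])
```

Run it a few times and the greedy line never changes while the sampled lines do; that run-to-run variation is the “flexibility” described above.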

Bard can be wrong

While LLMs are an exciting technology, they’re not without their faults. For instance, because they learn from a wide range of information that reflects real-world biases and stereotypes, those sometimes show up in their outputs.

And they can provide inaccurate, misleading, or false information while presenting it confidently. For example, when asked to share a couple of suggestions for easy indoor plants, Bard convincingly presented ideas, but it got some things wrong, like the scientific name for the ZZ plant.
