Amazon Bedrock is now GA
In April, Amazon announced Amazon Bedrock as its centerpiece product for building generative AI applications on AWS. Seemingly caught off guard by the incredible power of existing offerings like OpenAI's ChatGPT, DALL-E, and Midjourney, Amazon has been pouring resources into getting its hallmark competitor product to market. Its liberal use of the term “AI” in recent shareholder letters, interviews, tweets, and pretty much any time Adam Selipsky gets near a microphone telegraphs how much Amazon cares about this industry.
Just a few days ago, Amazon Bedrock became generally available, and you can get started today with a half-dozen or so foundation models. These foundation models are meant for AI simpletons like me, who have no knowledge of (or desire to develop) their own custom models but like to foray into AI from time to time. The models come from providers including AI21 Labs, Anthropic, Cohere, Stability AI, and Amazon itself, with Meta’s Llama 2 coming soon. Each offers different types of functionality, including chatbot features, text summarization, draft generation, text-to-image generation, and more. For a full list of models and their features, visit the Amazon Bedrock home page.
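You can also pull that same list programmatically. Here's a quick boto3 sketch (it assumes you're in a Bedrock-supported region like us-east-1 and have enabled model access in the console):

```python
import boto3

# The control-plane client ("bedrock") handles model discovery and management;
# a separate "bedrock-runtime" client is used for actual inference calls.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# List every foundation model available to your account in this region.
response = bedrock.list_foundation_models()
for model in response["modelSummaries"]:
    print(model["modelId"], "-", model["providerName"])
```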
Now, you can certainly do quite a bit with these pre-trained models, but if you're looking to develop novel applications, you may want a model that performs a custom task, like recognizing illnesses in X-rays. For these use cases, you can adapt a pre-trained model to a specific task by providing your own labeled training data. The resulting models won't necessarily be as good as ones developed from scratch, but they let you build custom task workloads with relatively low cost and effort.
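If you do go down that road, here's a rough sketch of what kicking off a customization job looks like in boto3. Everything here (job and model names, bucket paths, role ARN, hyperparameter values) is hypothetical, and fine-tuning support varies by model, so treat it as an outline rather than a recipe:

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Start a fine-tuning (model customization) job. The S3 paths, role ARN, and
# hyperparameter values below are hypothetical; substitute your own labeled
# training data and an IAM role that Bedrock is allowed to assume.
bedrock.create_model_customization_job(
    jobName="xray-classifier-job",                      # hypothetical name
    customModelName="xray-classifier",                  # hypothetical name
    roleArn="arn:aws:iam::123456789012:role/BedrockFineTune",  # hypothetical
    baseModelIdentifier="amazon.titan-text-express-v1", # base model to adapt
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
    hyperParameters={"epochCount": "2", "learningRate": "0.00001"},
)
```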
Speaking of cost, one of the nice things about using Bedrock is that it's a fully managed AWS service. There are no servers or infrastructure to manage; all of the complexity is handled by AWS behind the scenes. Instead, the service offers two pricing modes for inference: on-demand and provisioned. With on-demand, you pay based on the number of input and output tokens processed by your prompts. I'm not a big fan of this pricing model, by the way, since the price varies considerably depending on the model you choose. I feel like AWS can do better here to streamline the pricing experience. You can check out the pricing model for yourself here.
Provisioned mode is better suited for workloads that require a steady stream of token processing; think of large applications with a constant stream of user traffic. In this mode, you pick a commitment term (0 months, 1 month, or 6 months), with longer commitments meaning lower prices. Do keep in mind, though, that you still pay by the hour for each model you provision. And the costs can be pretty steep (at worst, $63.00 per hour for the Claude model).
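To make the trade-off concrete, here's some back-of-the-envelope math. The per-token rates below are made up purely for illustration; only the $63/hour figure comes from the pricing page:

```python
# Back-of-the-envelope comparison of the two pricing modes. The per-1K-token
# rates here are purely illustrative; real prices vary a lot by model, so
# check the Bedrock pricing page for actual numbers.

# On-demand: pay per 1K input/output tokens processed.
input_per_1k = 0.008    # hypothetical $ per 1K input tokens
output_per_1k = 0.024   # hypothetical $ per 1K output tokens
monthly_in, monthly_out = 10_000_000, 2_000_000  # tokens per month

on_demand = (monthly_in / 1000) * input_per_1k + (monthly_out / 1000) * output_per_1k
print(f"On-demand:   ${on_demand:,.2f}/month")    # $128.00

# Provisioned: a flat hourly rate per model, roughly 730 hours in a month.
provisioned = 63.00 * 730
print(f"Provisioned: ${provisioned:,.2f}/month")  # $45,990.00
```

At hypothetical rates like these, provisioned throughput only pays off once your token volume is enormous, so run the numbers for your own model before committing.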
A nice thing about the Bedrock console experience is the Playground mode. Here, you can interact with many of the different models directly in the AWS console, letting you test out chatbot functionality, text summarization, image generation, and more. For example, here's an image of “a fluffy dog sitting on a porch in front of a red door, shot with a canon 5d” that I generated using Stability AI's Stable Diffusion model. What a cute little fella!
An AI-generated dog using Amazon Bedrock + Stable Diffusion.
You also get to tune the input parameters to play with how the model generates its output. Even better, AWS offers a nifty button that copies the input configuration from your last run, making it super easy to move over to your code and pop the input straight into your API request. Here's an example of the same API request I used to generate the dog picture above:
Auto-generated API inputs make it super easy to get started with Bedrock.
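In case it's useful, here's roughly what that copied configuration looks like when dropped into a boto3 call. The body follows Stability's documented parameters, but the exact model ID may differ by version, so treat this as a sketch:

```python
import base64
import json
import boto3

# Inference goes through the "bedrock-runtime" client rather than "bedrock".
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request body copied from the console's input configuration; the parameter
# values (cfg_scale, steps, seed) are just what I happened to use in the run.
body = json.dumps({
    "text_prompts": [{"text": "a fluffy dog sitting on a porch in front of "
                              "a red door, shot with a canon 5d"}],
    "cfg_scale": 10,
    "steps": 50,
    "seed": 0,
})

response = runtime.invoke_model(
    modelId="stability.stable-diffusion-xl-v0",  # ID may vary by version
    body=body,
)

# The generated image comes back base64-encoded in the response body.
payload = json.loads(response["body"].read())
image_bytes = base64.b64decode(payload["artifacts"][0]["base64"])
with open("fluffy_dog.png", "wb") as f:
    f.write(image_bytes)
```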
Very nifty.
In addition to playing with the Playground, you can also interact with Bedrock programmatically through a set of APIs. Using a copied input similar to the one above, I was able to invoke the inference API, InvokeModel, to generate a hypothetical course lecture on AI. This API is synchronous, but there's an alternative, InvokeModelWithResponseStream, that streams the content back as tokens are generated. Do keep in mind that not all SDKs (at least to my knowledge) have been updated with the Bedrock APIs yet. Python's boto3, though, has been updated and is available now. You can check out the code I used here to get an idea of how it works.
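For a flavor of what that looks like, here's a trimmed-down sketch of the kind of call I made (the prompt is shortened, and the request body follows Claude's documented text-completion format at the time of writing):

```python
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude models expect the Human/Assistant prompt format and a
# max_tokens_to_sample limit in the request body.
body = json.dumps({
    "prompt": "\n\nHuman: Write a short course lecture introducing "
              "generative AI.\n\nAssistant:",
    "max_tokens_to_sample": 1024,
    "temperature": 0.7,
})

# invoke_model is synchronous: it blocks until the full completion is ready.
response = runtime.invoke_model(modelId="anthropic.claude-v2", body=body)
print(json.loads(response["body"].read())["completion"])

# For token-by-token streaming, swap in invoke_model_with_response_stream,
# which returns chunks as they're generated.
```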
Rough Around the Edges
I do have to say that I had a lot of fun tinkering around with Bedrock for a couple of hours. I will also say that it's still a bit rough around the edges. Initial prompts I made to the Jurassic-2 Ultra model by AI21 Labs produced some embarrassing output, like the below:
Not everything is perfect…
The interface is OK, but it feels rushed and a bit sloppily put together. Certain info tabs render behind pop-up dialog boxes and are unreadable unless you first minimize the box.
In another case, the Bedrock console somehow managed to crash my Firefox browser, requiring a good ol' force kill to give me back control. These small errors aren't a huge deal and are somewhat expected for a new product, but it would be nice if the experience were a bit more refined.
Overall, though, I'm excited to see how Bedrock shapes up as cloud providers duke it out for AI market share in this rapidly evolving landscape. The ability to pick from a wide variety of pre-trained models and integrate them with your applications feels like something many companies will be attracted to. It's now easier than ever to take advantage of cutting-edge models and bring the diverse capabilities of generative AI into your domain. I expect the features and options will only get better with time.
-Daniel