Rabbit's r1 refines chats and timers, but its app-using 'action model' is still MIA

Rabbit’s r1, the AI assistant gadget whose hype train has somewhat slowed since its debut at CES, has some updates to share — but probably not anything that will change its critics’ minds just yet.

A new “beta rabbit” mode adds some conversational AI chops to the device, particularly for handling more complex or multi-step requests. It should also be better at asking follow-up questions when it isn’t sure about something.

As an example, the company offers something like the following:

“beta rabbit, can you suggest three books similar to ‘the power of now’, include page length, year of release, and ratings, and save that as a note titled ‘reading list’. also include pictures of the authors”

[followed by:] “beta rabbit, can you also get me summaries for those three books?”

Setting a travel itinerary and finding deals or recommendations for products are some other suggested uses. As occasional users of chatbots ourselves, we’ve found that this type of task is impressive as a demo but seldom useful in reality.

For example, chatbot itineraries can be weird and unpredictable, and comparing specs and prices shows web-scraping prowess but ultimately is inconvenient on such a small device. And who is going to trust such haphazardly sourced book recommendations?

There are also some improvements to alarms and timers (you can see all the new stuff here), certainly welcome but also occasionally falling into the “wait a second…” category. For instance, “Set a timer for baking chocolate chip cookies”: At what temperature? How many? What kind of cookies? The AI can’t possibly know that stuff, making this a recipe for culinary disaster. On the other hand, it would be perfectly reasonable to ask it, “How long should two dozen chocolate chip cookies go in the oven at 300 degrees?”

Of course, what everyone is waiting on is the much-vaunted yet so far highly elusive “large action model” the company was touting back in January. The pitch — which I took as aspirational but not ridiculous then — was that the model was trained on phone and web app interfaces and would be able to navigate them autonomously to accomplish user-chosen tasks. So far that capability has not been shown outside of demos, and when an action is claimed to be using it, the result is indistinguishable from what an API or ordinary action scripting could accomplish.

I remain optimistic about the eventual utility of this funky little gadget, which is why, although I have had almost no occasion to use it since we got one to review, I haven’t banished it to a drawer — yet.

I’ve asked rabbit about when we can expect news on the LAM and will update this post if I hear back.
