Major Update

Jun 18, 2024

Introducing Local III

In the same way the automobile granted us personal freedom to explore, Local III starts our journey towards a new freedom — personal, private access to machine intelligence.

Developed by over 100 contributors spanning every timezone, this update includes an easy-to-use local model explorer, deep integrations with inference engines like Ollama, custom profiles for open models like Llama3, Moondream, and Codestral, and a suite of settings to make offline code interpretation more reliable.

Local III also introduces a free, hosted, opt-in model via interpreter --model i. Conversations with the i model will be used to train our own open-source language model for computer control.

The Local Explorer

Local III makes it easier than ever to use local models. With an interactive setup, you can choose an inference provider, pick a model, download new models, and more.

Want to add an inference engine? Please make a PR into the local explorer here.

The following flag starts the local explorer:

interpreter --local

The i Model

Local III also introduces a free language model endpoint serving Llama3-70B. This endpoint provides users with a setup-free experience while contributing to the training of a small, locally running language model.

We will remove personally identifiable information before open-sourcing the model and the training set.

By engaging with this model, you become an active participant in shaping the future of open-source AI:

interpreter --model i

Deep Ollama Integration

To give any Ollama model access to a code interpreter, simply run:

interpreter --model ollama/<model>

Where `<model>` is a model from Ollama's model library. This unified command abstracts away all model setup commands, and it downloads the model only if you haven't downloaded it before.
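The download-only-if-missing behavior described above is a simple cache check. A rough sketch of the idea (a hypothetical helper, not Open Interpreter's or Ollama's actual code — `download` is a stand-in callback):

```python
import os

def ensure_model(name, models_dir, download):
    """Fetch a model only if it is not already cached locally.

    Illustrative sketch of download-on-first-use; Ollama's real cache
    layout and download logic differ. `download` is a hypothetical
    callback that fetches the model weights.
    """
    path = os.path.join(models_dir, name)
    if not os.path.isdir(path):
        download(name)                # first use: fetch the weights
        os.makedirs(path, exist_ok=True)
    return path                       # later calls reuse the cached copy
```

On the first call the model is fetched; every later call for the same name is a no-op lookup.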

Optimized Profiles

We have experimented extensively with state-of-the-art local language models such as Codestral, Llama3, and Qwen. You can configure Open Interpreter to use our recommended settings for each with the following flags:

interpreter --profile codestral  # Sets optimal settings for Codestral
interpreter --profile llama3     # Sets optimal settings for Llama3
interpreter --profile qwen       # Sets optimal settings for Qwen

Note: The profile flag will load settings from files in the profiles directory, which you can open by running:

interpreter --profiles

If you find optimal settings for other local language models, please contribute them as a PR to the default profiles folder. Simply duplicate one of the existing profile files, then configure the model setup, system message, few-shot examples, etc.
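As an illustration, a minimal profile might look like the sketch below. This is a hypothetical example, not a file shipped with the repo; the model name and numeric values are placeholders you would tune for your hardware:

```python
# Hypothetical minimal profile sketch (not an actual file from the repo).
# Profiles are Python files that configure the global `interpreter` object.
from interpreter import interpreter

interpreter.llm.model = "ollama/codestral"  # which local model to use
interpreter.llm.context_window = 16000      # illustrative value
interpreter.llm.max_tokens = 1200           # illustrative value
interpreter.offline = True                  # don't reach external APIs
interpreter.system_message = "You are a capable local coding assistant."
```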

You can learn more about profiles here.

Local Vision

Images sent to local models are rendered as a description generated by Moondream, a tiny vision model. The model also receives text extracted from the image via OCR.

interpreter --local --vision
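The image-to-text handoff described above can be sketched as follows. This is an illustrative outline, not Open Interpreter's actual implementation; `describe` and `extract_text` are hypothetical stand-ins for the Moondream and OCR passes:

```python
def render_image_for_llm(image, describe, extract_text):
    """Turn an image into a text block a text-only local model can read.

    `describe` stands in for a small vision model (e.g. Moondream) and
    `extract_text` for an OCR pass; both are hypothetical stubs here.
    """
    description = describe(image)       # natural-language summary
    ocr_text = extract_text(image)      # literal text in the image
    return (
        f"Image description: {description}\n"
        f"Text found in image (OCR): {ocr_text}"
    )
```

The combined block is what the text-only model actually sees in place of the image.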

Experimental Local OS Mode

With local vision enabled, Local III also adds experimental support for local OS mode.

In this mode, Open Interpreter can see your screen and control your mouse and keyboard. The LLM interacts with your computer by clicking icons identified by our open-source Point model.

interpreter --local --os

Why Local?

If this revolution is to broadly distribute its benefits, it must belong to the people. In classical computing, society transitioned away from the mainframe era of access to build the personal computer. This helped ensure a destiny for computers which we could control.

Now, an oligopoly of language model providers stands to control the intelligence age. Open Interpreter is a balancing force against that. Our community is rapidly developing a response to ensure our collective freedom: private, local access to powerful AI agents.

Local III is a step towards a new destiny which we, the people, control.

Top Contributors

Special thanks to Ty Fiero, Anton, and CyanideByte for their excellent contributions this release cycle!

All Updates

* Fix Jupyter logging on shutdown by @tyfiero in

* Check for GPU or MPS availability before using CPU by @jcp in

* Add DevContainer Support by @weihongliang233 in

* Updated to address comment regarding pip installer not working by changing from bash to zsh. by @MartinLBeacham in

* Added 'py' alias for Python/JupyterLanguage by @CyanideByte in

* [FIX] Broken Link to Setup Fixed by @Sandeepsuresh1998 in

* Feature/ruby support by @bars0um in

* Fixed task completion message looping by @CyanideByte in

* Generate history conversation filenames in Chinese properly. by @Steve235lab in

* Display Link to Docs with Unrecognized Flags by @benxu3 in

* Add offline doc how-to in README by @dheavy in

* Fixed bug, may have broken something else but I don't think so by @imapersonman in

* Add argument minor refactor by @MikeBirdTech in

* Added flag reset to re-import `computer` instance by @meawal in

* Modify API key storage user recommendation by @rustom in

* Fix/profile disable_telemetry not working by @LucienShui in

* Fix default variable issue by @tyfiero in

* Remove pydantic warnings by @imapersonman in

* Multiple display support by @Amazingct in

* Updated litellm now that they fixed pydantic warning by @CyanideByte in

* Fix computer.calendar dates issue by @supersational in

* Optimize rendering of dynamic messages in by @kooroshkz in

* Ignore empty messages by @CyanideByte in

* Fix optional import crash and error by @CyanideByte in

* Add function to contribute conversations by @tyfiero in

* Add Ollama with llama3 as Default by @tyfiero in

* Segmented default.yaml into sections so it is clearer how to nest them by @zdaar in

* Remove config from docs by @MikeBirdTech in

* Fix Llama3 backtick hallucination in code blocks by @CyanideByte in

* Fix %% magic command by @Notnaton in

* Refine documentation formatting and style for clarity by @RateteApple in

* Bump version of tiktoken by @minamorl in

* Update to use litellm.support_function_calling() by @Notnaton in

* update local mode system message by @MikeBirdTech in

* Update local profile so it doesn't use function calling by @Notnaton in

* Add local OS profile for local OS control by @MikeBirdTech in

* Update litellm for namespace conflict warning fix by @CyanideByte in

* Spanish readme translation by palnever from discord by @CyanideByte in

* docs: update streaming-response.mdx by @eltociear in

* Updated by @KPCOFGS in

* Contributing interaction and sending command probably by @imapersonman in

* Added batch, bat aliases for shell language by @CyanideByte in

* Local update tons of fixes and new llamafiles by @CyanideByte in

* Fix llama 3 code hallucination by @Notnaton in

* Fixed linux installer by @okineadev in

* Updated installation scripts by @okineadev in

* fix typos by @RainRat in

New Contributors

* @jcp made their first contribution in

* @weihongliang233 made their first contribution in

* @MartinLBeacham made their first contribution in

* @Sandeepsuresh1998 made their first contribution in

* @benxu3 made their first contribution in

* @dheavy made their first contribution in

* @imapersonman made their first contribution in

* @meawal made their first contribution in

* @rustom made their first contribution in

* @LucienShui made their first contribution in

* @Amazingct made their first contribution in

* @supersational made their first contribution in

* @kooroshkz made their first contribution in

* @zdaar made their first contribution in

* @RateteApple made their first contribution in

* @minamorl made their first contribution in

* @KPCOFGS made their first contribution in

* @okineadev made their first contribution in

* @RainRat made their first contribution in

**Full Changelog**:
