# Ollama JavaScript Library

The Ollama JavaScript library provides the easiest way to integrate your JavaScript project with [Ollama](https://github.com/jmorganca/ollama).

## Getting Started

```
npm i ollama
```

## Usage

```javascript
import ollama from 'ollama'

const response = await ollama.chat({
  model: 'llama3.1',
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
console.log(response.message.content)
```

### Browser Usage

To use the library without Node.js, import the browser module.

```javascript
import ollama from 'ollama/browser'
```

## Streaming responses

Response streaming can be enabled by setting `stream: true`, which modifies the function call to return an `AsyncGenerator` where each part is an object in the stream.

```javascript
import ollama from 'ollama'

const message = { role: 'user', content: 'Why is the sky blue?' }
const response = await ollama.chat({ model: 'llama3.1', messages: [message], stream: true })
for await (const part of response) {
  process.stdout.write(part.message.content)
}
```

## Create

```javascript
import ollama from 'ollama'

const modelfile = `
FROM llama3.1
SYSTEM "You are mario from super mario bros."
`
await ollama.create({ model: 'example', modelfile: modelfile })
```

## API

The Ollama JavaScript library's API is designed around the [Ollama REST API](https://github.com/jmorganca/ollama/blob/main/docs/api.md).

### chat

```javascript
ollama.chat(request)
```

- `request` `<Object>`: The request object containing chat parameters.
  - `model` `<string>`: The name of the model to use for the chat.
  - `messages` `<Message[]>`: Array of message objects representing the chat history.
    - `role` `<string>`: The role of the message sender ('user', 'system', or 'assistant').
    - `content` `<string>`: The content of the message.
    - `images` `<Uint8Array[] | string[]>`: (Optional) Images to be included in the message, either as Uint8Array or base64 encoded strings.
  - `format` `<string>`: (Optional) Set the expected format of the response (`json`).
  - `stream` `<boolean>`: (Optional) When true an `AsyncGenerator` is returned.
  - `keep_alive` `<string | number>`: (Optional) How long to keep the model loaded.
  - `tools` `<Tool[]>`: (Optional) A list of tools the model may call.
  - `options` `<Options>`: (Optional) Options to configure the runtime.
- Returns: `<ChatResponse>`

### generate

```javascript
ollama.generate(request)
```

- `request` `<Object>`: The request object containing generate parameters.
  - `model` `<string>`: The name of the model to use for the generation.
  - `prompt` `<string>`: The prompt to send to the model.
  - `suffix` `<string>`: (Optional) The text that comes after the inserted text.
  - `system` `<string>`: (Optional) Override the model system prompt.
  - `template` `<string>`: (Optional) Override the model template.
  - `raw` `<boolean>`: (Optional) Bypass the prompt template and pass the prompt directly to the model.
  - `images` `<Uint8Array[] | string[]>`: (Optional) Images to be included, either as Uint8Array or base64 encoded strings.
  - `format` `<string>`: (Optional) Set the expected format of the response (`json`).
  - `stream` `<boolean>`: (Optional) When true an `AsyncGenerator` is returned.
  - `keep_alive` `<string | number>`: (Optional) How long to keep the model loaded.
  - `options` `<Options>`: (Optional) Options to configure the runtime.
- Returns: `<GenerateResponse>`

### pull

```javascript
ollama.pull(request)
```

- `request` `<Object>`: The request object containing pull parameters.
  - `model` `<string>`: The name of the model to pull.
  - `insecure` `<boolean>`: (Optional) Pull from servers whose identity cannot be verified.
  - `stream` `<boolean>`: (Optional) When true an `AsyncGenerator` is returned.
- Returns: `<ProgressResponse>`
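For example, pulling a model with `stream: true` yields progress updates as they arrive. The sketch below is a minimal illustration; the `status`, `completed`, and `total` fields are taken from the progress responses documented in the Ollama REST API, not from this README.

```javascript
import ollama from 'ollama'

// Stream the progress of a model download.
const progress = await ollama.pull({ model: 'llama3.1', stream: true })
for await (const part of progress) {
  // Each part carries a status string; byte counts are present
  // while a layer is downloading.
  if (part.completed && part.total) {
    console.log(`${part.status}: ${part.completed}/${part.total} bytes`)
  } else {
    console.log(part.status)
  }
}
```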
### push

```javascript
ollama.push(request)
```

- `request` `<Object>`: The request object containing push parameters.
  - `model` `<string>`: The name of the model to push.
  - `insecure` `<boolean>`: (Optional) Push to servers whose identity cannot be verified.
  - `stream` `<boolean>`: (Optional) When true an `AsyncGenerator` is returned.
- Returns: `<ProgressResponse>`

### create

```javascript
ollama.create(request)
```

- `request` `<Object>`: The request object containing create parameters.
  - `model` `<string>`: The name of the model to create.
  - `path` `<string>`: (Optional) The path to the Modelfile of the model to create.
  - `modelfile` `<string>`: (Optional) The content of the Modelfile to create.
  - `stream` `<boolean>`: (Optional) When true an `AsyncGenerator` is returned.
- Returns: `<ProgressResponse>`

### delete

```javascript
ollama.delete(request)
```

- `request` `<Object>`: The request object containing delete parameters.
  - `model` `<string>`: The name of the model to delete.
- Returns: `<StatusResponse>`

### copy

```javascript
ollama.copy(request)
```

- `request` `<Object>`: The request object containing copy parameters.
  - `source` `<string>`: The name of the model to copy from.
  - `destination` `<string>`: The name of the model to copy to.
- Returns: `<StatusResponse>`

### list

```javascript
ollama.list()
```

- Returns: `<ListResponse>`

### show

```javascript
ollama.show(request)
```

- `request` `<Object>`: The request object containing show parameters.
  - `model` `<string>`: The name of the model to show.
  - `system` `<string>`: (Optional) Override the model system prompt returned.
  - `template` `<string>`: (Optional) Override the model template returned.
  - `options` `<Options>`: (Optional) Options to configure the runtime.
- Returns: `<ShowResponse>`

### embed

```javascript
ollama.embed(request)
```

- `request` `<Object>`: The request object containing embedding parameters.
  - `model` `<string>`: The name of the model used to generate the embeddings.
  - `input` `<string> | <string[]>`: The input used to generate the embeddings.
  - `truncate` `<boolean>`: (Optional) Truncate the input to fit the maximum context length supported by the model.
  - `keep_alive` `<string | number>`: (Optional) How long to keep the model loaded.
  - `options` `<Options>`: (Optional) Options to configure the runtime.
- Returns: `<EmbedResponse>`

### ps

```javascript
ollama.ps()
```

- Returns: `<ListResponse>`

### abort

```javascript
ollama.abort()
```

This method aborts all streamed generations currently running. All asynchronous iterators listening to streams (typically the `for await (const part of response)` loops) will throw an `AbortError` exception; a usage sketch appears at the end of this README.

## Custom client

A custom client can be created with the following fields:

- `host` `<string>`: (Optional) The Ollama host address. Default: `"http://127.0.0.1:11434"`.
- `fetch` `<Object>`: (Optional) The fetch library used to make requests to the Ollama host.

```javascript
import { Ollama } from 'ollama'

const ollama = new Ollama({ host: 'http://127.0.0.1:11434' })
const response = await ollama.chat({
  model: 'llama3.1',
  messages: [{ role: 'user', content: 'Why is the sky blue?' }],
})
```

## Building

To build the project files run:

```sh
npm run build
```
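As referenced in the `abort` section above, the sketch below shows one way to cancel an in-flight streamed chat. It is a minimal illustration that reuses the model and prompt from the earlier examples; the check for `name === 'AbortError'` is an assumption that follows the standard behavior of aborted fetch requests.

```javascript
import ollama from 'ollama'

try {
  const stream = await ollama.chat({
    model: 'llama3.1',
    messages: [{ role: 'user', content: 'Why is the sky blue?' }],
    stream: true,
  })
  // Abort every stream running on this client after one second.
  setTimeout(() => ollama.abort(), 1000)
  for await (const part of stream) {
    process.stdout.write(part.message.content)
  }
} catch (error) {
  // The interrupted `for await` loop surfaces the abort as an error.
  if (error.name === 'AbortError') {
    console.log('\nGeneration aborted')
  } else {
    throw error
  }
}
```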