instructor-php
Structured data outputs with LLMs, in PHP. Designed for simplicity, transparency, and control.
Structured data extraction in PHP, powered by LLMs. Designed for simplicity, transparency, and control.
Instructor is a library that allows you to extract structured, validated data from multiple types of inputs: text, images or OpenAI style chat sequence arrays. It is powered by Large Language Models (LLMs).
Instructor simplifies LLM integration in PHP projects. It handles the complexity of extracting structured data from LLM outputs, so you can focus on building your application logic and iterate faster.
Instructor for PHP is inspired by the Instructor library for Python created by Jason Liu.
(A simple CLI demo app using Instructor to extract structured data from text is shown in the repository.)
Feature highlights:
- Get structured responses from LLMs without writing boilerplate code
- Validation of returned data
- Automated retries in case of errors when LLM responds with invalid data
- Integrate LLM support into your existing PHP code with minimal friction - no framework, no extensive code changes
- Process various types of input data: text, a series of chat messages, or images - using the same, simple API
- 'Structured-to-structured' processing - provide an object or array as input and get an object with the results of inference back
- Provide examples to improve the quality of inference
- Define the response data model the way you want: type-hinted classes, JSON Schema arrays, or dynamic data shapes with the Structure class
- Customize prompts and retry prompts
- Use attributes or PHP DocBlocks to provide additional instructions for LLM
- Customize response model processing by providing your own implementation of schema, deserialization, validation and transformation interfaces
- Supports both synchronous and streaming responses
- Get partial updates & stream completed sequence items
- Get detailed insight into internal processing via events
- Debug mode to see the details of LLM API requests and responses
- Easily switch between LLM providers
- Support for most popular LLM APIs (incl. OpenAI, Gemini, Anthropic, Cohere, Azure, Groq, Mistral, Fireworks AI, Together AI)
- OpenRouter support - access to 100+ language models
- Use local models with Ollama
- Developer friendly LLM context caching for reduced costs and faster inference (for Anthropic models)
- Developer friendly data extraction from images (for OpenAI, Anthropic and Gemini models)
- Learn more from growing documentation and 50+ cookbooks
Check out implementations in other languages below:
- Python (original)
- Javascript (port)
- Elixir (port)
If you want to port Instructor to another language, please reach out to us on Twitter - we'd love to help you get started!
Instructor introduces three key enhancements compared to direct API usage.
You just specify a PHP class to extract data into via the 'magic' of LLM chat completion. And that's it.
Instructor reduces brittleness of the code extracting the information from textual data by leveraging structured LLM responses.
Instructor helps you write simpler, easier to understand code - you no longer have to define lengthy function call definitions or write code for assigning returned JSON into target data objects.
The response model generated by the LLM can be automatically validated against a set of rules. Currently, Instructor supports only Symfony validation.
You can also provide a context object to use enhanced validator capabilities.
You can set the number of retry attempts for requests.
Instructor will repeat the request in case of validation or deserialization errors, up to the specified number of times, trying to get a valid response from the LLM.
Installing Instructor is simple. Run the following command in your terminal, and you're on your way to a smoother data handling experience!
composer require cognesy/instructor-php
This is a simple example demonstrating how Instructor retrieves structured information from provided text (or chat message sequence).
The response model class is a plain PHP class with typehints specifying the types of the object's fields.
use Cognesy\Instructor\Instructor;
// Step 0: Create .env file in your project root:
// OPENAI_API_KEY=your_api_key
// Step 1: Define target data structure(s)
class Person {
public string $name;
public int $age;
}
// Step 2: Provide content to process
$text = "His name is Jason and he is 28 years old.";
// Step 3: Use Instructor to run LLM inference
$person = (new Instructor)->respond(
messages: $text,
responseModel: Person::class,
);
// Step 4: Work with structured response data
assert($person instanceof Person); // true
assert($person->name === 'Jason'); // true
assert($person->age === 28); // true
echo $person->name; // Jason
echo $person->age; // 28
var_dump($person);
// Person {
// name: "Jason",
// age: 28
// }
NOTE: Instructor supports classes / objects as response models. In case you want to extract simple types or enums, you need to wrap them in Scalar adapter - see section below: Extracting Scalar Values.
Instructor allows you to define multiple API connections in the llm.php file.
This is useful when you want to use different LLMs or API providers in your application.
The default configuration is located in /config/llm.php in the root directory of the Instructor codebase. It contains a set of predefined connections to all LLM APIs supported out-of-the-box by Instructor.
The config file defines connections to LLM APIs and their parameters. It also specifies the default connection to be used when calling Instructor without specifying the client connection.
/* This is a fragment of the /config/llm.php file */
'defaultConnection' => 'openai',
//...
'connections' => [
'anthropic' => [ ... ],
'cohere2' => [ ... ],
'gemini' => [ ... ],
'ollama' => [
'clientType' => ClientType::Ollama->value,
'apiUrl' => Env::get('OLLAMA_API_URL', 'http://localhost:11434/v1'),
'apiKey' => Env::get('OLLAMA_API_KEY', ''),
'defaultModel' => Env::get('OLLAMA_DEFAULT_MODEL', 'gemma2:2b'),
'defaultMaxTokens' => Env::get('OLLAMA_DEFAULT_MAX_TOKENS', 1024),
'connectTimeout' => Env::get('OLLAMA_CONNECT_TIMEOUT', 3),
'requestTimeout' => Env::get('OLLAMA_REQUEST_TIMEOUT', 30),
],
// ...
To customize the available connections, you can either modify existing entries or add your own.
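For example, an entry for your own provider might look like this - a sketch modeled on the predefined Ollama entry above; the connection name, environment variable names, and default values are illustrative:
/* Hypothetical custom entry added to 'connections' in llm.php */
'my-custom-openai' => [
    'clientType' => ClientType::OpenAI->value, // assumes an OpenAI-compatible client type
    'apiUrl' => Env::get('CUSTOM_API_URL', 'https://api.example.com/v1'), // illustrative URL
    'apiKey' => Env::get('CUSTOM_API_KEY', ''),
    'defaultModel' => Env::get('CUSTOM_DEFAULT_MODEL', 'gpt-4o-mini'),
    'defaultMaxTokens' => Env::get('CUSTOM_DEFAULT_MAX_TOKENS', 1024),
    'connectTimeout' => Env::get('CUSTOM_CONNECT_TIMEOUT', 3),
    'requestTimeout' => Env::get('CUSTOM_REQUEST_TIMEOUT', 30),
],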
Connecting to an LLM API via a predefined connection is as simple as calling the withConnection method with the connection name.
<?php
// ...
$user = (new Instructor)
->withConnection('ollama')
->respond(
messages: "His name is Jason and he is 28 years old.",
responseModel: Person::class,
);
// ...
You can change the location of the configuration files for Instructor to use via the INSTRUCTOR_CONFIG_PATH environment variable. You can use copies of the default configuration files as a starting point.
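A minimal sketch of pointing Instructor at a custom config location (the path is illustrative; you can equally set the variable in your .env file or shell):
// Illustrative: set the config path before instantiating Instructor
putenv('INSTRUCTOR_CONFIG_PATH=/path/to/your/config');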
Instructor offers a way to use structured data as input. This is useful when you want to use object data as input and get another object with the result of LLM inference.
The input field of Instructor's respond() and request() methods can be an object, but also an array or just a string.
<?php
use Cognesy\Instructor\Instructor;
class Email {
public function __construct(
public string $address = '',
public string $subject = '',
public string $body = '',
) {}
}
$email = new Email(
address: 'joe@gmail',
subject: 'Status update',
body: 'Your account has been updated.'
);
$translation = (new Instructor)->respond(
input: $email,
responseModel: Email::class,
prompt: 'Translate the text fields of email to Spanish. Keep other fields unchanged.',
);
assert($translation instanceof Email); // true
dump($translation);
// Email {
// address: "joe@gmail",
// subject: "Actualización de estado",
// body: "Su cuenta ha sido actualizada."
// }
?>
Instructor validates the results of the LLM response against validation rules specified in your data model.
For further details on available validation rules, check Symfony Validation constraints.
use Symfony\Component\Validator\Constraints as Assert;
class Person {
public string $name;
#[Assert\PositiveOrZero]
public int $age;
}
$text = "His name is Jason, he is -28 years old.";
$person = (new Instructor)->respond(
messages: [['role' => 'user', 'content' => $text]],
responseModel: Person::class,
);
// if the resulting object does not validate, Instructor throws an exception
If the maxRetries parameter is provided and the LLM response does not meet the validation criteria, Instructor will make subsequent inference attempts until the results meet the requirements or maxRetries is reached.
Instructor uses validation errors to inform the LLM about the problems identified in the response, so that the LLM can try to self-correct in the next attempt.
use Symfony\Component\Validator\Constraints as Assert;
class Person {
#[Assert\Length(min: 3)]
public string $name;
#[Assert\PositiveOrZero]
public int $age;
}
$text = "His name is JX, aka Jason, he is -28 years old.";
$person = (new Instructor)->respond(
messages: [['role' => 'user', 'content' => $text]],
responseModel: Person::class,
maxRetries: 3,
);
// if all LLM's attempts to self-correct the results fail, Instructor throws an exception
You can call the request() method to set the parameters of the request and then call get() to get the response.
use Cognesy\Instructor\Instructor;
$instructor = (new Instructor)->request(
messages: "His name is Jason, he is 28 years old.",
responseModel: Person::class,
);
$person = $instructor->get();
Instructor supports streaming of partial results, allowing you to start processing the data as soon as it is available.
<?php
use Cognesy\Instructor\Instructor;
$stream = (new Instructor)->request(
messages: "His name is Jason, he is 28 years old.",
responseModel: Person::class,
options: ['stream' => true]
)->stream();
foreach ($stream as $partialPerson) {
// process partial person data
echo $partialPerson->name;
echo $partialPerson->age;
}
// after streaming is done you can get the final, fully processed person object...
$person = $stream->getLastUpdate();
// ...to, for example, save it to the database
$db->save($person);
?>
You can define an onPartialUpdate() callback to receive partial results that can be used to start updating the UI before the LLM completes the inference.
NOTE: Partial updates are not validated. The response is only validated after it is fully received.
use Cognesy\Instructor\Instructor;
function updateUI($person) {
// Here you get a partially completed Person object - update the UI with the partial result
}
$person = (new Instructor)->request(
messages: "His name is Jason, he is 28 years old.",
responseModel: Person::class,
options: ['stream' => true]
)->onPartialUpdate(
fn($partial) => updateUI($partial)
)->get();
// Here you get completed and validated Person object
$this->db->save($person); // ...for example: save to DB
You can provide a string instead of an array of messages. This is useful when you want to extract data from a single block of text and want to keep your code simple.
// Usually, you work with sequences of messages:
$value = (new Instructor)->respond(
messages: [['role' => 'user', 'content' => "His name is Jason, he is 28 years old."]],
responseModel: Person::class,
);
// ...but if you want to keep it simple, you can just pass a string:
$value = (new Instructor)->respond(
messages: "His name is Jason, he is 28 years old.",
responseModel: Person::class,
);
Sometimes we just want to get quick results without defining a class for the response model, especially if we're trying to get a straight, simple answer in the form of a string, integer, boolean or float. Instructor provides a simplified API for such cases.
use Cognesy\Instructor\Extras\Scalar\Scalar;
use Cognesy\Instructor\Instructor;
$value = (new Instructor)->respond(
messages: "His name is Jason, he is 28 years old.",
responseModel: Scalar::integer('age'),
);
var_dump($value);
// int(28)
In this example, we're extracting a single integer value from the text. You can also use Scalar::string(), Scalar::boolean() and Scalar::float() to extract other types of values.
Additionally, you can use the Scalar adapter to extract one of the provided options by using Scalar::enum().
use Cognesy\Instructor\Extras\Scalar\Scalar;
use Cognesy\Instructor\Instructor;
enum ActivityType : string {
case Work = 'work';
case Entertainment = 'entertainment';
case Sport = 'sport';
case Other = 'other';
}
$value = (new Instructor)->respond(
messages: "His name is Jason, he currently plays Doom Eternal.",
responseModel: Scalar::enum(ActivityType::class, 'activityType'),
);
var_dump($value);
// enum(ActivityType::Entertainment)
Sequence is a wrapper class that can be used to represent a list of objects to be extracted by Instructor from the provided context.
It is usually more convenient not to create a dedicated class with a single array property just to handle a list of objects of a given class.
An additional, unique feature of sequences is that they can be streamed per completed item in the sequence, rather than on every property update (see the sketch after the example below).
use Cognesy\Instructor\Extras\Sequence\Sequence;
use Cognesy\Instructor\Instructor;

class Person
{
public string $name;
public int $age;
}
$text = <<<TEXT
Jason is 25 years old. Jane is 18 yo. John is 30 years old
and Anna is 2 years younger than him.
TEXT;
$list = (new Instructor)->respond(
messages: [['role' => 'user', 'content' => $text]],
responseModel: Sequence::of(Person::class),
options: ['stream' => true]
);
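To consume sequence items as they are completed, you can reuse the request()/stream() pattern shown in the streaming example above; a sketch (how you inspect each partial sequence update is up to you - see the Sequences section for the exact API):
$stream = (new Instructor)->request(
    messages: [['role' => 'user', 'content' => $text]],
    responseModel: Sequence::of(Person::class),
    options: ['stream' => true]
)->stream();
foreach ($stream as $update) {
    // each $update is the sequence with the items completed so far
}
$people = $stream->getLastUpdate(); // final, validated sequence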
See more about sequences in the Sequences section.
Use PHP type hints to specify the type of extracted data.
Use nullable types to indicate that a given field is optional.
class Person {
public string $name;
public ?int $age;
public Address $address;
}
You can also use PHP DocBlock style comments to specify the type of extracted data. This is useful when you want to specify property types for the LLM, but can't or don't want to enforce types at the code level.
class Person {
/** @var string */
public $name;
/** @var int */
public $age;
/** @var Address $address person's address */
public $address;
}
See the PHPDoc documentation on the DocBlock website for more details.
PHP currently does not support generics or typehints to specify array element types.
Use PHP DocBlock style comments to specify the type of array elements.
class Person {
// ...
}
class Event {
// ...
/** @var Person[] list of extracted event participants */
public array $participants;
// ...
}
Instructor can retrieve complex data structures from text. Your response model can contain nested objects, arrays, and enums.
use Cognesy\Instructor\Instructor;
// define data structures to extract data into
class Person {
public string $name;
public int $age;
public string $profession;
/** @var Skill[] */
public array $skills;
}
class Skill {
public string $name;
public SkillType $type;
}
enum SkillType : string {
case Technical = 'technical';
case Other = 'other';
}
$text = "Alex is 25 years old software engineer, who knows PHP, Python and can play the guitar.";
$person = (new Instructor)->respond(
messages: [['role' => 'user', 'content' => $text]],
responseModel: Person::class,
);
// data is extracted into an object of given class
assert($person instanceof Person); // true
// you can access object's extracted property values
echo $person->name; // Alex
echo $person->age; // 25
echo $person->profession; // software engineer
echo $person->skills[0]->name; // PHP
echo $person->skills[0]->type->value; // technical
// ...
var_dump($person);
// Person {
// name: "Alex",
// age: 25,
// profession: "software engineer",
// skills: [
// Skill {
// name: "PHP",
// type: SkillType::Technical,
// },
// Skill {
// name: "Python",
// type: SkillType::Technical,
// },
// Skill {
// name: "guitar",
// type: SkillType::Other
// },
// ]
// }
If you want to define the shape of data during runtime, you can use the Structure class.
Structures allow you to define and modify an arbitrary shape of data to be extracted by the LLM. Classes may not be the best fit for this purpose, as declaring or changing them during execution is not possible.
With structures, you can define custom data shapes dynamically, for example based on the user input or context of the processing, to specify the information you need LLM to infer from the provided text or chat messages.
Example below demonstrates how to define a structure and use it as a response model:
<?php
use Cognesy\Instructor\Extras\Structure\Field;
use Cognesy\Instructor\Extras\Structure\Structure;
use Cognesy\Instructor\Instructor;
enum Role : string {
case Manager = 'manager';
case Line = 'line';
}
$structure = Structure::define('person', [
Field::string('name'),
Field::int('age'),
Field::enum('role', Role::class),
]);
$person = (new Instructor)->respond(
messages: 'Jason is 25 years old and is a manager.',
responseModel: $structure,
);
// you can access structure data via field API...
assert($person->field('name') === 'Jason');
// ...or as structure object properties
assert($person->age === 25);
?>
For more information see Structures section.
You can specify the model and other options that will be passed to the OpenAI / LLM endpoint.
use Cognesy\Instructor\Instructor;
use Cognesy\Instructor\Extras\LLM\Data\LLMConfig;
use Cognesy\Instructor\Extras\LLM\Drivers\OpenAIDriver;
// OpenAI auth params
$yourApiKey = Env::get('OPENAI_API_KEY'); // use your own API key
// Create instance of OpenAI driver initialized with custom parameters
$driver = new OpenAIDriver(new LLMConfig(
apiUrl: 'https://api.openai.com/v1', // you can change base URI
apiKey: $yourApiKey,
endpoint: '/chat/completions',
metadata: ['organization' => ''],
model: 'gpt-4o-mini',
maxTokens: 128,
));
// Get Instructor with the default driver component overridden with your own
$instructor = (new Instructor)->withDriver($driver);
$user = $instructor->respond(
messages: "Jason (@jxnlco) is 25 years old and is the admin of this project. He likes playing football and reading books.",
responseModel: User::class,
model: 'gpt-3.5-turbo',
options: ['stream' => true ]
);
Instructor offers out-of-the-box support for the following API providers:
- Anthropic
- Azure OpenAI
- Cohere
- Fireworks AI
- Gemini
- Groq
- Mistral
- Ollama (on localhost)
- OpenAI
- OpenRouter
- Together AI
For usage examples, check the Hub section or the examples directory in the code repository.
You can use PHP DocBlocks (/** */) to provide additional instructions for the LLM at class or field level, for example to clarify what you expect or how the LLM should process your data.
Instructor extracts PHP DocBlock comments from class and property definitions and includes them in the specification of the response model sent to the LLM.
Using PHP DocBlock instructions is not required, but sometimes you may want to clarify your intentions to improve the LLM's inference results.
/**
* Represents a skill of a person and context in which it was mentioned.
*/
class Skill {
public string $name;
/** @var SkillType $type type of the skill, derived from the description and context */
public SkillType $type;
/** Directly quoted, full sentence mentioning person's skill */
public string $context;
}
You can use the ValidationMixin trait to add easy, custom validation to your data objects.
use Cognesy\Instructor\Validation\Traits\ValidationMixin;
class User {
use ValidationMixin;
public int $age;
public string $name;
public function validate() : array {
if ($this->age < 18) {
return ["User has to be adult to sign the contract."];
}
return [];
}
}
Instructor uses the Symfony validation component to validate extracted data. You can use the #[Assert\Callback] annotation to build fully customized validation logic.
use Cognesy\Instructor\Instructor;
use Symfony\Component\Validator\Constraints as Assert;
use Symfony\Component\Validator\Context\ExecutionContextInterface;
class UserDetails
{
public string $name;
public int $age;
#[Assert\Callback]
public function validateName(ExecutionContextInterface $context, mixed $payload) {
if ($this->name !== strtoupper($this->name)) {
$context->buildViolation("Name must be in uppercase.")
->atPath('name')
->setInvalidValue($this->name)
->addViolation();
}
}
}
$user = (new Instructor)->respond(
messages: [['role' => 'user', 'content' => 'jason is 25 years old']],
responseModel: UserDetails::class,
maxRetries: 2
);
assert($user->name === "JASON");
See Symfony docs for more details on how to use Callback constraint.
As Instructor for PHP processes your request, it goes through several stages:
- Initialize and self-configure (with possible overrides defined by the developer).
- Analyze the classes and properties of the response data model specified by the developer.
- Encode the data model into a schema that can be provided to the LLM.
- Execute the request to the LLM using the specified messages (content) and response model metadata.
- Receive a response from the LLM, or multiple partial responses (if streaming is enabled).
- Deserialize the response received from the LLM into the originally requested classes and their properties.
- If the response contains incomplete or corrupted data, create a feedback message for the LLM and request regeneration of the response.
- Execute the validations defined by the developer for the data model - if any of them fail, create a feedback message for the LLM and request regeneration of the response.
- Repeat steps 4-8 until the response passes validation or the specified retry limit is reached.
Instructor allows you to receive detailed information at every stage of request and response processing via events.
- (new Instructor)->onEvent(string $class, callable $callback) method - receive a callback when the specified type of event is dispatched
- (new Instructor)->wiretap(callable $callback) method - receive any event dispatched by Instructor; may be useful for debugging or performance analysis
Receiving events helps you monitor the execution process and makes it easier to understand and resolve any processing issues.
$instructor = (new Instructor)
// see requests to LLM
->onEvent(RequestSentToLLM::class, fn($e) => dump($e))
// see responses from LLM
->onEvent(ResponseReceivedFromLLM::class, fn($event) => dump($event))
// see all events in console-friendly format
->wiretap(fn($event) => dump($event->toConsole()));
$instructor->respond(
messages: "What is the population of Paris?",
responseModel: Scalar::integer(),
);
// check your console for the details on the Instructor execution
Instructor can process several types of values provided as the response model, giving you more flexibility in how you interact with the library.
The signature of Instructor's respond() method states that responseModel can be either a string, an object, or an array.
If a string value is provided, it is used as the name of the response model class.
Instructor checks if the class exists and analyzes the class and property type information and doc comments to generate a schema needed to specify LLM response constraints.
The best way to provide the name of the response model class is to use NameOfTheClass::class instead of a string, making it possible for the IDE to execute type checks, handle refactorings, etc.
If an object value is provided, it is considered an instance of the response model. Instructor checks the class of the instance, then analyzes it and its property type data to specify LLM response constraints.
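For example, you can pass an instance instead of a class name (a minimal sketch reusing the Person class defined earlier):
// Passing an instance of the response model instead of a class name
$person = (new Instructor)->respond(
    messages: "His name is Jason, he is 28 years old.",
    responseModel: new Person(),
);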
If an array value is provided, it is considered a raw JSON Schema, allowing Instructor to use it directly in LLM requests (after wrapping it in an appropriate context - e.g. a function call).
Instructor requires information on the class of each nested object in your JSON Schema, so it can correctly deserialize the data into the appropriate type.
This information is available to Instructor when you are passing $responseModel as a class name or an instance, but it is missing from a raw JSON Schema.
The current design uses the JSON Schema $comment field on a property to overcome this. Instructor expects the developer to use the $comment field to provide the fully qualified name of the target class to be used to deserialize property data of object or enum type.
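A sketch of what this might look like in a raw JSON Schema array - the property and class names here are hypothetical; the key detail is the $comment carrying the fully qualified class name:
// Illustrative raw JSON Schema passed as $responseModel
$schema = [
    'type' => 'object',
    'properties' => [
        'address' => [
            'type' => 'object',
            '$comment' => 'App\\Data\\Address', // FQCN used to deserialize this property
            'properties' => [
                'city' => ['type' => 'string'],
            ],
        ],
    ],
    'required' => ['address'],
];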
Instructor also allows you to customize processing of the $responseModel value by looking at the interfaces the class or instance implements:
- CanProvideJsonSchema - implement to provide a JSON Schema of the response model, overriding the default approach of Instructor, which is analyzing the $responseModel value class information
- CanDeserializeSelf - implement to customize the way the response from the LLM is deserialized from JSON into a PHP object
- CanValidateSelf - implement to customize the way the deserialized object is validated
- CanTransformSelf - implement to transform the validated object into the target value received by the caller (e.g. unwrap a simple type from a class to a scalar value)
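As an illustration, a response model providing its own schema might look roughly like this - a sketch only: the interface namespace and the method name/signature are assumptions, so check the actual CanProvideJsonSchema definition in the codebase:
use Cognesy\Instructor\Contracts\CanProvideJsonSchema; // namespace is an assumption

class City implements CanProvideJsonSchema {
    public string $name;
    public int $population;

    // assumed method name and signature - consult the actual interface
    public function toJsonSchema() : array {
        return [
            'type' => 'object',
            'properties' => [
                'name' => ['type' => 'string'],
                'population' => ['type' => 'integer'],
            ],
            'required' => ['name', 'population'],
        ];
    }
}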
The PHP ecosystem does not (yet) have a strong equivalent of Pydantic, which is at the core of Instructor for Python.
To provide the essential functionality, Instructor for PHP leverages:
- base capabilities of PHP type system,
- PHP reflection,
- PHP DocBlock type hinting conventions,
- Symfony serialization and validation capabilities
Instructor for PHP is compatible with PHP 8.2 or later and, due to minimal dependencies, should work with any framework of your choice.
- Guzzle
- Symfony components:
  - symfony/property-access
  - symfony/property-info
  - symfony/serializer
  - symfony/type-info
  - symfony/validator
- adbario/php-dot-notation
- phpdocumentor/reflection-docblock
- phpstan/phpdoc-parser
- vlucas/phpdotenv
Additional dependencies are required for some extras:
- spatie/array-to-xml
- gioni06/gpt3-tokenizer
- [ ] Async support
- [ ] Documentation
If you want to help, check out some of the issues. All contributions are welcome - code improvements, documentation, bug reports, blog posts / articles, or new cookbooks and application examples.
This project is licensed under the terms of the MIT License.
If you have any questions or need help, please reach out to me on Twitter or GitHub.