![windows9x](/statics/github-mark.png)
windows9x
It is Windows, but all of the apps are AI generated.
Stars: 111
![screenshot](/screenshots_githubs/SawyerHood-windows9x.jpg)
Windows9X is an experimental operating system that allows users to generate applications on the fly by entering descriptions of programs. It leverages an LLM to create HTML files resembling Windows 98 applications, with access to a limited OS API for file operations, registry interactions, and LLM prompting.
README:
Windows9X is the operating system of future's past. What if Windows 98 could generate any application that you wanted on the fly? You enter a description of the program you want into the Run dialog and the OS creates it for you. This is an experiment in end-user programming, exploring what it could be like if an OS could write applications for you as you need them.
https://github.com/SawyerHood/windows9x/assets/2380669/19c91f20-cdf5-47f9-b6fc-b7a606b9d7b3
Create a `.env` file with the following in the root directory:

```
ANTHROPIC_API_KEY=<your-anthropic-api-key>
NEXT_PUBLIC_LOCAL_MODE=true
# Optional Replicate API key for icon generation
REPLICATE_API_TOKEN=<your-replicate-api-token>
```

Note: `NEXT_PUBLIC_LOCAL_MODE` disables the use of third-party services other than your model provider.
- Install node v21 or greater
- Install bun
- Run `bun install`
- Run `bun run dev`
- Navigate to `http://localhost:3000`
When you enter a description of an application, an LLM is prompted to generate an HTML file that looks like a Windows 98 application. This is done by injecting 98.css into the page as the result streams in. The generated page is rendered inside of an iframe, which in turn is rendered inside of a window.
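As a rough sketch of that flow (not the project's actual code), a generated page can be wrapped with a 98.css stylesheet link and displayed through an iframe's `srcdoc`; the `renderGeneratedApp` function and the `generatedHtml` placeholder below are hypothetical:

```ts
// Minimal sketch: display LLM-generated markup styled with 98.css.
// `generatedHtml` stands in for the streamed LLM output; the real OS
// streams the result into the page incrementally.
function renderGeneratedApp(generatedHtml: string): HTMLIFrameElement {
  const iframe = document.createElement("iframe");
  // srcdoc renders the markup as an isolated document inside the window.
  iframe.srcdoc = `
    <link rel="stylesheet" href="https://unpkg.com/98.css" />
    ${generatedHtml}
  `;
  return iframe;
}
```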
In addition, applications have access to a limited OS API that allows them to save and read files, read from and write to the registry, and prompt an LLM.
This is the API that applications have access to, in case it helps with prompting:
```ts
declare global {
  // Chat lets you use an LLM to generate a response.
  var chat: (
    messages: { role: "user" | "assistant" | "system"; content: string }[]
  ) => Promise<string>;

  var registry: Registry;

  // If the application supports saving and opening files, register a callback to be called when the user saves/opens the file.
  // The format of the file is any plain text format that your application can read. If these are registered, the OS will create
  // the file picker for you. The operating system will create the file menu for you.
  // The callback should return the new content of the file.
  // Used like this:
  // registerOnSave(() => {
  //   return "new content";
  // });
  var registerOnSave: (callback: () => string) => void;

  // Register a callback to be called when the user opens the file.
  // The callback is called with the content of the file.
  // Used like this:
  // registerOnOpen((content) => {
  //   console.log(content);
  // });
  var registerOnOpen: (callback: (content: string) => void) => void;
}

// Uses for the registry:
// - To store user settings
// - To store user data
// - To store user state
// - Interact with the operating system.
//
// If the key can be written by other apps, it should be prefixed with "public_".
interface Registry {
  get(key: string): Promise<any>;
  set(key: string, value: any): Promise<void>;
  delete(key: string): Promise<void>;
  listKeys(): Promise<string[]>;
}
```
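To make the API concrete, here is a hypothetical note-taking app sketched against it. Only `chat`, `registry`, `registerOnSave`, and `registerOnOpen` come from the API above; the `notes` state, the `notes_draft` key, and `summarize` are illustrative and not part of the project:

```ts
// Hypothetical generated app built against the OS API above.
let notes = "";

// Restore the last draft from the registry (no "public_" prefix, so other
// apps cannot write this key).
registry.get("notes_draft").then((saved) => {
  if (typeof saved === "string") notes = saved;
});

// Once these callbacks are registered, the OS provides the File menu and
// file pickers for plain-text documents.
registerOnSave(() => notes); // return the file's new content
registerOnOpen((content) => {
  notes = content;
});

// Persist the draft and ask the LLM for a one-sentence summary.
async function summarize(): Promise<string> {
  await registry.set("notes_draft", notes);
  return chat([
    { role: "system", content: "Summarize the user's notes in one sentence." },
    { role: "user", content: notes },
  ]);
}
```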
I want to thank the following people:
- Scratch - Inspired me by getting Windows 98 to run inside of a VM inside of Websim.
- Nate - Nate and I independently started building the same project. Nate gave me the idea of "chatting with the developer" to change the program.
- Jordan - Made 98.css, which is the basis that makes this look like Windows 98.
Here are a few examples of applications that can be created:
Everything in Windows9X is a file, so you can generate a program that generates another program:
https://github.com/SawyerHood/windows9x/assets/2380669/cb6f5189-9276-4526-a517-7d84b3331ba7
Here is an example of a generated application that can in turn generate websites:
https://github.com/SawyerHood/windows9x/assets/2380669/7a9a0a32-d52d-4901-8f33-d7e148e8bee3
Creating a natural language SQL prompter:
https://github.com/SawyerHood/windows9x/assets/2380669/ab404aa7-7018-4d07-96b4-4678ca388d10
Similar Open Source Tools
![langchain Screenshot](/screenshots_githubs/brainlid-langchain.jpg)
langchain
LangChain is a framework for developing Elixir applications powered by language models. It enables applications to connect language models to other data sources and interact with the environment. The library provides components for working with language models and off-the-shelf chains for specific tasks. It aims to assist in building applications that combine large language models with other sources of computation or knowledge. LangChain is written in Elixir and does not aim for parity with the JavaScript and Python versions due to differences in programming paradigms and design choices. The library is designed to make it easy to integrate language models into applications and expose features, data, and functionality to the models.
![agentscript Screenshot](/screenshots_githubs/AgentScript-AI-agentscript.jpg)
agentscript
AgentScript is an open-source framework for building AI agents that think in code. It prompts a language model to generate JavaScript code, which is then executed in a dedicated runtime with resumability, state persistence, and interactivity. The framework allows for abstract task execution without needing to know all the data beforehand, making it flexible and efficient. AgentScript supports tools, deterministic functions, and LLM-enabled functions, enabling dynamic data processing and decision-making. It also provides state management and human-in-the-loop capabilities, allowing for pausing, serialization, and resumption of execution.
![HackBot Screenshot](/screenshots_githubs/morpheuslord-HackBot.jpg)
HackBot
HackBot is an AI-powered cybersecurity chatbot designed to provide accurate answers to cybersecurity-related queries, conduct code analysis, and scan analysis. It utilizes the Meta-LLama2 AI model through the 'LlamaCpp' library to respond coherently. The chatbot offers features like local AI/Runpod deployment support, cybersecurity chat assistance, interactive interface, clear output presentation, static code analysis, and vulnerability analysis. Users can interact with HackBot through a command-line interface and utilize it for various cybersecurity tasks.
![fortuna Screenshot](/screenshots_githubs/awslabs-fortuna.jpg)
fortuna
Fortuna is a library for uncertainty quantification that enables users to estimate predictive uncertainty, assess model reliability, trigger human intervention, and deploy models safely. It provides calibration and conformal methods for pre-trained models in any framework, supports Bayesian inference methods for deep learning models written in Flax, and is designed to be intuitive and highly configurable. Users can run benchmarks and bring uncertainty to production systems with ease.
![llamafile Screenshot](/screenshots_githubs/Mozilla-Ocho-llamafile.jpg)
llamafile
llamafile is a tool that enables users to distribute and run Large Language Models (LLMs) with a single file. It combines llama.cpp with Cosmopolitan Libc to create a framework that simplifies the complexity of LLMs into a single-file executable called a 'llamafile'. Users can run these executable files locally on most computers without the need for installation, making open LLMs more accessible to developers and end users. llamafile also provides example llamafiles for various LLM models, allowing users to try out different LLMs locally. The tool supports multiple CPU microarchitectures, CPU architectures, and operating systems, making it versatile and easy to use.
![ScreenAgent Screenshot](/screenshots_githubs/niuzaisheng-ScreenAgent.jpg)
ScreenAgent
ScreenAgent is a project focused on creating an environment for Visual Language Model agents (VLM Agent) to interact with real computer screens. The project includes designing an automatic control process for agents to interact with the environment and complete multi-step tasks. It also involves building the ScreenAgent dataset, which collects screenshots and action sequences for various daily computer tasks. The project provides a controller client code, configuration files, and model training code to enable users to control a desktop with a large model.
![stagehand Screenshot](/screenshots_githubs/browserbase-stagehand.jpg)
stagehand
Stagehand is an AI web browsing framework that simplifies and extends web automation using three simple APIs: act, extract, and observe. It aims to provide a lightweight, configurable framework without complex abstractions, allowing users to automate web tasks reliably. The tool generates Playwright code based on atomic instructions provided by the user, enabling natural language-driven web automation. Stagehand is open source, maintained by the Browserbase team, and supports different models and model providers for flexibility in automation tasks.
![serverless-pdf-chat Screenshot](/screenshots_githubs/aws-samples-serverless-pdf-chat.jpg)
serverless-pdf-chat
The serverless-pdf-chat repository contains a sample application that allows users to ask natural language questions of any PDF document they upload. It leverages serverless services like Amazon Bedrock, AWS Lambda, and Amazon DynamoDB to provide text generation and analysis capabilities. The application architecture involves uploading a PDF document to an S3 bucket, extracting metadata, converting text to vectors, and using LangChain to search for information related to user prompts. The application is not intended for production use and serves as a demonstration and educational tool.
![generative-ai-sagemaker-cdk-demo Screenshot](/screenshots_githubs/aws-samples-generative-ai-sagemaker-cdk-demo.jpg)
generative-ai-sagemaker-cdk-demo
This repository showcases how to deploy generative AI models from Amazon SageMaker JumpStart using the AWS CDK. Generative AI is a type of AI that can create new content and ideas, such as conversations, stories, images, videos, and music. The repository provides a detailed guide on deploying image and text generative AI models, utilizing pre-trained models from SageMaker JumpStart. The web application is built on Streamlit and hosted on Amazon ECS with Fargate. It interacts with the SageMaker model endpoints through Lambda functions and Amazon API Gateway. The repository also includes instructions on setting up the AWS CDK application, deploying the stacks, using the models, and viewing the deployed resources on the AWS Management Console.
![nanoPerplexityAI Screenshot](/screenshots_githubs/Yusuke710-nanoPerplexityAI.jpg)
nanoPerplexityAI
nanoPerplexityAI is an open-source implementation of a large language model service that fetches information from Google. It involves a simple architecture where the user query is checked by the language model, reformulated for Google search, and an answer is generated and saved in a markdown file. The tool requires minimal setup and is designed for easy visualization of answers.
![llama-on-lambda Screenshot](/screenshots_githubs/baileytec-labs-llama-on-lambda.jpg)
llama-on-lambda
This project provides a proof of concept for deploying a scalable, serverless LLM Generative AI inference engine on AWS Lambda. It leverages the llama.cpp project to enable the usage of more accessible CPU and RAM configurations instead of limited and expensive GPU capabilities. By deploying a container with the llama.cpp converted models onto AWS Lambda, this project offers the advantages of scale, minimizing cost, and maximizing compute availability. The project includes AWS CDK code to create and deploy a Lambda function leveraging your model of choice, with a FastAPI frontend accessible from a Lambda URL. It is important to note that you will need ggml quantized versions of your model and model sizes under 6GB, as your inference RAM requirements cannot exceed 9GB or your Lambda function will fail.
![neutone_sdk Screenshot](/screenshots_githubs/Neutone-neutone_sdk.jpg)
neutone_sdk
The Neutone SDK is a tool designed for researchers to wrap their own audio models and run them in a DAW using the Neutone Plugin. It simplifies the process by allowing models to be built using PyTorch and minimal Python code, eliminating the need for extensive C++ knowledge. The SDK provides support for buffering inputs and outputs, sample rate conversion, and profiling tools for model performance testing. It also offers examples, notebooks, and a submission process for sharing models with the community.
![claude.vim Screenshot](/screenshots_githubs/pasky-claude.vim.jpg)
claude.vim
Claude.vim is a Vim plugin that integrates Claude, an AI pair programmer, into your Vim workflow. It allows you to chat with Claude about what to build or how to debug problems, and Claude offers opinions, proposes modifications, or even writes code. The plugin provides a chat/instruction-centric interface optimized for human collaboration, with killer features like access to chat history and vimdiff interface. It can refactor code, modify or extend selected pieces of code, execute complex tasks by reading documentation, cloning git repositories, and more. Note that it is early alpha software and expected to rapidly evolve.
![nagato-ai Screenshot](/screenshots_githubs/kenshiro-o-nagato-ai.jpg)
nagato-ai
Nagato-AI is an intuitive AI Agent library that supports multiple LLMs including OpenAI's GPT, Anthropic's Claude, Google's Gemini, and Groq LLMs. Users can create agents from these models and combine them to build an effective AI Agent system. The library is named after the powerful ninja Nagato from the anime Naruto, who can control multiple bodies with different abilities. Nagato-AI acts as a linchpin to summon and coordinate AI Agents for specific missions. It provides flexibility in programming and supports tools like Coordinator, Researcher, Critic agents, and HumanConfirmInputTool.
![llamabot Screenshot](/screenshots_githubs/ericmjl-llamabot.jpg)
llamabot
LlamaBot is a Pythonic bot interface to Large Language Models (LLMs), providing an easy way to experiment with LLMs in Jupyter notebooks and build Python apps utilizing LLMs. It supports all models available in LiteLLM. Users can access LLMs either through local models with Ollama or by using API providers like OpenAI and Mistral. LlamaBot offers different bot interfaces like SimpleBot, ChatBot, QueryBot, and ImageBot for various tasks such as rephrasing text, maintaining chat history, querying documents, and generating images. The tool also includes CLI demos showcasing its capabilities and supports contributions for new features and bug reports from the community.
For similar tasks
![srcbook Screenshot](/screenshots_githubs/srcbookdev-srcbook.jpg)
srcbook
Srcbook is an open-source interactive programming environment for TypeScript that allows users to create, run, and share reproducible programs and ideas. It features AI capabilities for exploring and iterating on ideas, supports exporting to valid markdown format, and enables diagraming with mermaid for rich annotations. Users can locally execute programs through a web interface, powered by Node.js under the Apache2 license.
![airdcpp-webclient Screenshot](/screenshots_githubs/airdcpp-web-airdcpp-webclient.jpg)
airdcpp-webclient
AirDC++ Web Client is a locally installed application designed for flexible file sharing within groups over a local network or the internet. It utilizes the Advanced Direct Connect protocol to create file sharing communities with thousands of users. The application offers a responsive web user interface, allows sharing local directories, searching for shared files, saving files, chatting capabilities, browsing shared directories, extension support, and a web API for HTTP REST and WebSockets.
For similar jobs
![sweep Screenshot](/screenshots_githubs/sweepai-sweep.jpg)
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
![teams-ai Screenshot](/screenshots_githubs/microsoft-teams-ai.jpg)
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
![ai-guide Screenshot](/screenshots_githubs/Crataco-ai-guide.jpg)
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
![classifai Screenshot](/screenshots_githubs/10up-classifai.jpg)
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
![chatbot-ui Screenshot](/screenshots_githubs/mckaywrigley-chatbot-ui.jpg)
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
![BricksLLM Screenshot](/screenshots_githubs/bricks-cloud-BricksLLM.jpg)
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM:
- Set LLM usage limits for users on different pricing tiers
- Track LLM usage on a per user and per organization basis
- Block or redact requests containing PIIs
- Improve LLM reliability with failovers, retries and caching
- Distribute API keys with rate limits and cost limits for internal development/production use cases
- Distribute API keys with rate limits and cost limits for students
![uAgents Screenshot](/screenshots_githubs/fetchai-uAgents.jpg)
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
![griptape Screenshot](/screenshots_githubs/griptape-ai-griptape.jpg)
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.