VedAstro
A non-profit, open source project to make Vedic Astrology easily available to all.
Stars: 222
VedAstro is an open-source Vedic astrology tool that provides accurate astrological predictions and data. It offers a user-friendly website, a chat API, an open API, a JavaScript SDK, a Swiss Ephemeris API, and a machine learning table generator. VedAstro is free to use and is constantly being updated with new features and improvements.
README:
- Website --> easy & fast astrology data for normal users
- Chat API --> world's 1st open source Vedic AI Chat bot
- Open API --> free astrology data for your app or website with a simple HTTP GET (see the sketch after this list)
- JavaScript SDK --> easy to use JS library to simplify API access and use
- Swiss Ephemeris API --> free advanced astronomy data from NASA's JPL Ephemeris
- Learn Astro Computation --> learn exact math & logic used in astrology via free open source code
- ML Table Generator --> easily generate large astronomical tables for use in ML/AI model training and data science
- Match Finder --> find your astrologically perfect match in our global database
- Life Predictor --> accurate algorithmic prediction of a human life's past and future
- Build On Top --> import VedAstro code directly into your existing projects
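To give a feel for the Open API bullet above, here is a minimal sketch of a plain HTTP GET from a .NET app. The route shown is a placeholder (the "..." is not a real path); check the API docs for the actual URL format.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

// Minimal sketch of calling the Open API with a plain HTTP GET.
// NOTE: the route is a placeholder ("..."), not the documented endpoint format.
class OpenApiSketch
{
    static async Task Main()
    {
        using var client = new HttpClient();
        var url = "https://api.vedastro.org/..."; // placeholder; see the API docs for real routes
        string json = await client.GetStringAsync(url); // one GET, JSON comes back
        Console.WriteLine(json);
    }
}
```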
As the sage Parashara imparted the wisdom of the stars freely, unbound by wealth or claim, so too does VedAstro, a gift unencumbered by price or title.
Philosophy's the scaffold we use when we build,
Without it, a mud hut, not a structure fulfilled.
For creating grand codes, like VedAstro's design,
Philosophy's essential, its role is divine.
The purpose of VedAstro, we must understand,
Born of joy, in development it must stand.
This project thrives on happiness, pure and bright,
Don't code till your heart feels the building's delight.
When your fingers can't keep up, joy fills the air,
You'll know that your code is placed right with care.
In the universe vast, it finds its own way,
Your work shines with brilliance, come what may.
Below is a sample API call result for data related to "Sun" on "30/06/2023" at "Singapore" --> Watch Video Guide --> JS Demo Files --> Demo API Call
"Payload": {
"SwissEphemeris": "{ Longitude = 97.672406549912, Latitude = 2.2248025536827577E-05, DistanceAU = 1.0165940297895264, SpeedLongitude = 0, SpeedLatitude = 0, SpeedDistance = 0 }",
"AbdaBala": "0",
"AspectedByMalefics": "False",
"AyanaBala": "118.071100045034",
"Benefic": "False",
"ChestaBala": "0",
"ConjunctWithMalefics": "True",
"Constellation": "Aridra - 3",
"Debilitated": "False",
"Declination": "23.2284400180136",
"DigBala": "5.222314814814815",
"Drekkana": "Libra",
"DrekkanaBala": "0",
"DrikBala": "4.883715277777782",
"Dwadasamsa": "Scorpio",
"Exalted": "False",
"HoraBala": "60",
"HousePlanetIsIn": "5",
"InKendra": "False",
"IsPlanetInOwnHouse": "False",
"IsPlanetStrongInShadvarga": "False",
"KalaBala": "200.68443337836732",
"KendraBala": "15",
"Malefic": "True",
"MasaBala": "0",
"Moolatrikona": "False",
"Motion": "Direct",
"NaisargikaBala": "60",
"NathonnathaBala": "5.709722222222221",
"Navamsa": "Aquarius",
"NirayanaLongitude": {
"DegreeMinuteSecond": "74Β° 56' 18",
"TotalDegrees": "74.93833333333333"
},
"OchchaBala": "38.35388888888889",
"OjayugmarasyamsaBala": "30",
"PakshaBala": "16.90361111111111",
"PlanetHoraSign": "Leo",
"PlanetsInConjuction": "Mercury",
"ReceivingAspectFrom": "",
"Saptamsa": "Virgo",
"SaptavargajaBala": "91.875",
"SayanaLatitude": {
"DegreeMinuteSecond": "0Β° 0' 0",
"TotalDegrees": "0"
},
"SayanaLongitude": {
"DegreeMinuteSecond": "97Β° 40' 20",
"TotalDegrees": "97.67222222222222"
},
"ShadbalaPinda": "446.02",
"ShadvargaBala": "88.125",
"Sign": {
"Name": "Gemini",
"DegreesIn": {
"DegreeMinuteSecond": "14Β° 56' 17",
"TotalDegrees": "14.938055555555556"
}
},
"SignsPlanetIsAspecting": "Sagittarius",
"Speed": "0.9533156649003025",
"SthanaBala": "175.2288888888889",
"TemporaryFriends": "Venus, Mars, Jupiter",
"Thrimsamsa": "Sagittarius",
"TotalStrength": "446.02",
"TransmittingAspectToHouse": "House11",
"TransmittingAspectToPlanet": "",
"TribhagaBala": "0",
"VaraBala": "0"
}
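As a rough illustration (not code from the project), the payload above can be read with System.Text.Json; the outer { "Payload": { ... } } shape and the property names are taken directly from the sample.

```csharp
using System;
using System.Text.Json;

// Sketch: pull a couple of values out of the sample "Sun" payload shown above.
// Assumes the response is an object of the shape { "Payload": { ... } } as in the sample.
class PayloadSketch
{
    static void Main()
    {
        string json = "{\"Payload\":{\"TotalStrength\":\"446.02\",\"Sign\":{\"Name\":\"Gemini\"}}}";
        using JsonDocument doc = JsonDocument.Parse(json);
        JsonElement payload = doc.RootElement.GetProperty("Payload");
        Console.WriteLine(payload.GetProperty("TotalStrength").GetString());           // 446.02
        Console.WriteLine(payload.GetProperty("Sign").GetProperty("Name").GetString()); // Gemini
    }
}
```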
Anybody who has studied Vedic Astrology knows how accurate it can be, but also how complex it can get to make accurate predictions. It takes decades of experience to be able to make accurate predictions, so this knowledge reaches only a limited number of people. This project is an effort to change that. Read More
The first line of code for this project was written in late 2014 at Itä-Pasila. It started as a simple desktop software, with no UI and only text display. With continued support from users, this project has steadily grown to what it is today, helping people from all over the world.
Thanks to B.V. Raman and his grandfather B. Suryanarain Rao for pioneering easy to read astrology books. Credit also goes to St. Jean-Baptiste de La Salle for proving the efficacy of free and open work for the benefit of all men...Read More.
This development style celebrates the methodology of chaotic development, with the benefit of low cost and fast-paced prototyping. It is inspired by the concept of "Gonzo Journalism", pioneered by Hunter S. Thompson in the 1970s.
We favour this pattern for the development of VedAstro simply due to the volatile nature of this project. Other development styles like "Waterfall" and "Scrum" are equally good when the need calls for them.
We want to:
- try out novel ideas in a heartbeat
- use the latest platform
- keep it cheap
Hence the "gonzo development" pattern is best suited for these needs.
We would like to introduce in this project a novel UX concept called "Drunk Proofing". The idea is simple: all UI is designed to be operable by an alcoholically intoxicated person, a.k.a. a drunk.
Why? Because this forces the team to make a simple and intuitive UI design. It is all too easy during development to make a complicated UI that only coders understand. But it is far more difficult and rewarding to make the UI intuitive & easy. A "no manuals" and "no brainer" approach to design.
The wisdom of ages, once passed down by word,
Now stored in circuits, rarely heard.
Once this knowledge was held in minds so keen,
Now it's coded in machines unseen.
The human touch, that once gave knowledge birth,
Replaced by algorithms, shaping future's girth.
Philosophy is equivalent to the scaffolding used when constructing a building. You can build without scaffolding, but then it is called a mud hut, not a building. Thus philosophy is essential to build a large & complex code structure like VedAstro.
The reason for the existence of VedAstro needs to be understood and kept in mind during development. This is a project born of joy and kept alive by it; as such, do not touch the code until your heart is filled with the joy of building beauty with electrons and your fingers can't keep up.
Then you know your code is right, and has a place for it in this universe.
Leslie Choi : Sponsored & believed in the project even when work was only half done.
JetBrains : Gave free "ReSharper License" that made coding life easier.
Just Like & Share our social pages and it'll be a big help already!
If you want to do more than just click "Like" & "Share", then join us.
We're always looking for somebody to improve the code or help with funding.
We discuss & share ideas on astrology and computation. And ways you can integrate VedAstro into your own project.
The main part of the program is the prediction/event generator. It works by combining logic on how to calculate a prediction with data about that prediction. This is done every time a "Calculate" button is clicked. Below you will see a brief explanation of this process (a simplified code sketch follows the diagrams). This method was chosen to easily accommodate the thousands of astrological calculation possibilities.
CREATION OF AN EVENT/PREDICTION
STEP 1
Hard-coded event data, like the name, is stored in an XML file.
A copy of the event name is stored as an Enum to link
Calculator Methods with the data from XML.
These static methods are the logic to check
if an event occurred. No astro calculation is done at this stage.
This is the linking process of the logic and data.
-------+
|
+-----------------+ |
| Event Data (xml)| |
+-----------------+ |
+ |
+------------------+ |
|Event Names (Enum)| +-----> Event Data (Instance)
+------------------+ |
+ |
+------------------+ |
|Calculator Methods| |
+------------------+ |
|
------+
STEP 2
From the above step, a list of Event Data is generated.
The IsOccuring logic of each Event Data is called with time slices,
generated from a start time & end time (entered at runtime).
An Event is created if IsOccuring is true.
This is a merger of Time and EventData to create an
Event at a specific time. This Event is then used
throughout the program.
Event Data + Time Range
List List
|
|
|
v
Event List
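A simplified code sketch of the two steps above; the type names and signatures here (EventData, IsOccuring as a delegate, Event) are stand-ins for illustration, not the library's actual classes.

```csharp
using System;
using System.Collections.Generic;

// Step 1 result: XML data + enum name + calculator logic linked into one instance.
enum EventName { ExampleEventA, ExampleEventB }              // stand-in enum names

record EventData(EventName Name, string Description,
                 Func<DateTimeOffset, bool> IsOccuring);     // linked "is occurring" logic

record Event(EventName Name, string Description, DateTimeOffset Time);

static class EventGeneratorSketch
{
    // Step 2: slice the start..end range and create an Event wherever IsOccuring is true.
    public static List<Event> Generate(IEnumerable<EventData> eventDataList,
                                       DateTimeOffset start, DateTimeOffset end, TimeSpan slice)
    {
        var events = new List<Event>();
        for (var time = start; time <= end; time += slice)
            foreach (var data in eventDataList)
                if (data.IsOccuring(time))
                    events.Add(new Event(data.Name, data.Description, time));
        return events;
    }
}
```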
+--------+ +------------------------+ +------------------+
| User | <------+ | Website | -------------> | API |
| | +------> | - Blazor WebAssembly | <------------- | -Azure Functions |
+--------+ GUI | - Azure Static WebApp | XML | |
| | | |
+------------------------+ +------------------+
- Visual Studio 2022
- Target .Net 8.0
- Fork the project and check it out locally.
- Go to the history and check out the commit tagged "stable", otherwise you might face an API-to-website mismatch.
- Open the project in Visual Studio
- Right-click and unload the projects below (you don't need these for general project work)
- API.Python
- Console
- Library.API
- StaticTableGenerator
- Right-click the project and select 'Build Solution' (if you face any issues please post in the Slack channel for support)
- If you want to run against the server-hosted API, you need to set "Website" as the "startup project" by right-clicking it
- Now from the Run menu in the toolbar (dark green arrow) select "IISExpress" (don't select Website)
- this will open a browser window; you can copy the URL and paste it into your main browser window so that you can use logged-in Google/Facebook auth
- If you want to run against your locally running VedAstro APIs then
- stop the locally running website and then do the steps below
- open one more instance of VS2022
- open the same project, and now set API as the "startup project" for that VS instance
- find the local.settings.sample.json file (contact the Slack channel to get these properties - they are sensitive so they are not checked in; a skeleton is sketched after these steps)
- rename it by removing "sample" from the name so it becomes "local.settings.json"
- Now from the Run menu in the toolbar (dark green arrow) select "API" (no need to select Docker)
- this will open a command window and show the APIs initialized (if there is any error please connect via the Slack VedAstro channel)
- run the website using the steps above and then log in using your Facebook or Google OAuth
- go to your profile, Enable Debug and save (this will instruct the code to look for the API locally).
Now you can have fun with VedAstro ;-) Try making a horoscope and share your feedback in the Slack channel.
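For reference, the renamed local.settings.json normally follows the standard Azure Functions layout sketched below; the project-specific (sensitive) values from the sample file are deliberately left out here and must come from the Slack channel.

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}
```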
All 3 are independent and only linked in VS for easy access; don't commit the locally referenced .csproj to Git, as it'll be used by CI/CD.
- Create a method in EventCalculatorMethods.cs
- Add the name in EventNames.cs
- Add the prediction/event details in HoroscopeDataList.xml
- Edit the EventTag enum in Genso.Astrology.Library. A change here is reflected even in the GUI (a rough sketch of these steps follows below).
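A rough outline of those four steps; every name below (the method, the enum member, the XML element names) is a placeholder for illustration, and the real signatures and schema in the library differ.

```csharp
using System;

// Stand-in types so this sketch compiles on its own; the real Time/Person live in the library.
public record Time(DateTimeOffset Value);
public record Person(string Name, Time BirthTime);

// 1. EventCalculatorMethods.cs -- static "is occurring" logic (return type simplified to bool here).
public static class EventCalculatorMethodsSketch
{
    public static bool SunInGeminiExample(Time time, Person person)
        => time.Value.Month == 6; // placeholder condition standing in for the real astro check
}

// 2. EventNames.cs -- add a matching enum member so the XML data and the logic can be linked.
public enum EventNameSketch { SunInGeminiExample }

// 3. HoroscopeDataList.xml -- add the prediction text (element names assumed for illustration):
//    <Event Name="SunInGeminiExample" Tag="Personal"><Description>...</Description></Event>

// 4. Genso.Astrology.Library EventTag enum -- make sure the tag used above exists there,
//    since a change here is reflected even in the GUI.
```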
These are randomly ordered notes on why a feature was implemented in a certain way.
They will prove useful when debugging & upgrading code.
Shows only a clean & nice HTML index for bots from the best-known search engines.
For direct access to a Blazor page via static storage without a 404 error: since no page actually exists at the page URL, Blazor takes the URL and runs the page as an app. This is made possible using the rules engine; the rules also make sure not to redirect file & API access, only page access:
- does not begin with "/api/"
- has a path
- Sec-Fetch-Mode = navigate
- Sec-Fetch-Dest = document
web : vedastro.org -> domain registrar -> azure DNS -> azure cdn -> web blob storage
api stable : api.vedastro.org -> domain registrar -> azure DNS -> azure cdn -> stable api server (render)
api beta : beta.api.vedastro.org -> domain registrar -> azure DNS -> azure cdn -> beta api server (azure)
domain cert managed by Let's Encrypt ACME bot azure func
via Azure CDN Rules Engine, this allows the use of api.vedastro.org/...
& api.vedastro.org/nlp/...
Since this is not documented by B.V. Raman, the code here was created through experimentation by repeating the relationship between the Dasa planet & Bhukti planet.
Not all data regarding an event is hardwired. Generating gochara, antaram, sukshma and others is more efficient if the description is created by the astronomical calculator. At the moment EventDataList.xml is the source of truth, meaning if an event exists in the XML file, then it must exist in code.
- Accessing the events chart directly via API-generated HTML
- CORS in Azure Website Storage needs to be disabled for this to work, outside of vedastro.org
The default timezone generated for all SVG charts is based on the client timezone. Timezone does not matter when full life charts are made, but matters a lot when short-term muhurtha charts are generated. Since most users are not living where they were born, it is only logical to default to the client browser's timezone. This timezone must be visible/changeable for users who need otherwise.
- This feature is to store notes on the dasa report
- The notes are actually Events converted to XML and stored inside each person's record
- When rendering, these events are placed on top of the dasa report view
WEBSITE : Why are astrological calculations done on the API server and not in the client (browser) via WebAssembly?
- The calculations tested on an Intel Xeon with parallel processing take about 1 GB RAM & 30% CPU. With these loads, browsers with mobile CPUs are going to be problematic for sure. So as not to waste time, the API route was chosen since it has been proven to work.
- There are places where all astronomical computation is done in the client, e.g. the Planet Info Box
- Built with reference to Hindu Predictive Astrology, pg. 254.
- Asthavarga bindus are different from shadbala and are to be implemented soon.
- Asthavarga bindus are not yet accounted for, i.e. the asthavarga good or bad nature of the planet. It is assumed that the Shadbala system can compensate for it.
- This passage on page 255 needs to be clarified: "It must be noted that when passing through the first 10 degrees of a sign, Mars and the Sun produce results."
- It is interpreted that Vedha is an obstruction and not a reversal of the Gochara results. So for now the design is that if a vedha is present then the result is simply nullified.
- In Horoscope prediction methods, "time" & "person" arguments are available; obviously "time" is not needed, but for the sake of semantic similarity with the Muhurtha methods this is maintained.
- Option 1 : generate a high-res image (svg/html) and zoom horizontally into it - very fast - image gets blurry
- Option 2 : regenerate the whole component in Blazor - very slow - hard to implement with touch screen
- Option 3 : generate multiple preset zooms, then place them on top of each other, and only make visible what is needed via a selector - complicated, needs documentation - easy touch screen implementation - very fast
Thus Option 3 was chosen.
- Structs are used to reduce overhead from large collections, e.g. List<>
- When structs are part of a class, they are stored in the heap. An additional benefit is that structs need less memory than a class because they have no ObjectHeader or MethodTable. You should consider using a struct when the size of the struct will be minimal (say around 16 bytes), the struct will be short-lived, or the struct will be immutable.
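A small illustration of that guideline (not taken from the codebase): a compact, immutable value type that avoids per-object headers when stored in a large List<>.

```csharp
// Illustration only: ~16 bytes, immutable, short-lived -- a good struct candidate by the
// rule of thumb above; elements are stored inline in the list's backing array.
public readonly struct AngleSketch
{
    public readonly double Degrees;
    public readonly double Minutes;
    public AngleSketch(double degrees, double minutes) { Degrees = degrees; Minutes = minutes; }
}

// var angles = new List<AngleSketch>(100_000); // no ObjectHeader/MethodTable per element
```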
- default hashing is inconsistent, MD5 is used
- many classes' GetHashCode overrides still use default hashing (in the cache mechanism), which could result in errors; needs to be updated
- NOTE : all default hashing is instance specific (FOR STRINGS ONLY so far); it works as an id in one environment, but with the Client + Server config the hashes become different, so it needs changing to MD5
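A sketch of the kind of stable hash that note points at (the helper name is ours, not the project's): string.GetHashCode() is randomized per process in .NET, so it cannot act as a shared id between client and server, while an MD5 digest of the same text is always identical.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Stable, environment-independent hash sketch (helper name is ours, not the project's).
public static class StableHashSketch
{
    public static string Md5Of(string text)
    {
        byte[] hash = MD5.HashData(Encoding.UTF8.GetBytes(text)); // .NET 5+ static helper
        return Convert.ToHexString(hash);                         // same input => same id everywhere
    }
}
```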
- In classes/structs that only represent data and not computation, use direct property naming without modifiers like "Get" or "Set". Example: the Person struct should expose "Person.BirthTime" and not "Person.GetBirthTime()"
- 3 files exist now, azure storage, desktop, wwwroot (TODO delete all but wwwroot)
- 2 of these files exist: 1 local in MuhurthaCore for the desktop version, the other online in VedAstro Azure storage for use by the API. Both files need to be in sync; if you forget to sync, use the file with the latest update.
- Future TODO: simplify into 1 file; the local MuhurthaCore can be deprecated.
- Generally 1 tag for 1 event, add only when needed.
- Multiple tags can be used by 1 event, separated by "," in the Tag element (see the XML sketch below)
- Done so that an event can be accessed for multiple uses. Example: Tarabala Events are tagged for Personal & Tarabala.
- Needs to be added with care and where absolutely needed, else could get very confusing.
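A hedged sketch of such an entry; only the comma-separated Tag convention comes from the notes above, while the surrounding element and attribute names are assumptions for illustration.

```xml
<!-- Sketch only: element/attribute names are assumed; the comma-separated Tag value
     is the convention described above (one event, multiple tags). -->
<Event Name="TarabalaExampleEvent">
  <Tag>Personal, Tarabala</Tag>
  <Description>Example prediction text...</Description>
</Event>
```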
To all those who say we need money to do good: Jesus said not.
"It is easier for a camel to go through the eye of a needle, than for a rich man to enter the kingdom of God"
Oh so bright, On a Tuesday morning,
I'm pondering life, and what's in sight.
Is it fear, fate, justice, or a test of might?
From my father's voice rings a resounding insight.
Joy of my love, it's your guiding light!
All men that have joy, have God, just right,
Making love to their sweet wife, there, God's in sight!
When men love their wives, with all their heart,
They see a glimpse of God, a work of art.
In those precious moments, they see God's might.
Yet, swiftly it fades, like a star in the night
When fleeting moments pass, and cries are heard,
And we're left to wonder, if joy's been blurred.
Chasing worldly delights, may bring us cheer
But joy is what lasts, and banishes all fear.
To pursue worldly pleasures, is not quite right,
It's short-sighted, like a bat in the daylight.
They seek joy, in their ceaseless flight,
Forgetting it's joy that makes their wings ignite.
Alternative AI tools for VedAstro
Similar Open Source Tools
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.
CoPilot
TigerGraph CoPilot is an AI assistant that combines graph databases and generative AI to enhance productivity across various business functions. It includes three core component services: InquiryAI for natural language assistance, SupportAI for knowledge Q&A, and QueryAI for GSQL code generation. Users can interact with CoPilot through a chat interface on TigerGraph Cloud and APIs. CoPilot requires LLM services for beta but will support TigerGraph's LLM in future releases. It aims to improve contextual relevance and accuracy of answers to natural-language questions by building knowledge graphs and using RAG. CoPilot is extensible and can be configured with different LLM providers, graph schemas, and LangChain tools.
Tools4AI
Tools4AI is a Java-based Agentic Framework for building AI agents to integrate with enterprise Java applications. It enables the conversion of natural language prompts into actionable behaviors, streamlining user interactions with complex systems. By leveraging AI capabilities, it enhances productivity and innovation across diverse applications. The framework allows for seamless integration of AI with various systems, such as customer service applications, to interpret user requests, trigger actions, and streamline workflows. Prompt prediction anticipates user actions based on input prompts, enhancing user experience by proactively suggesting relevant actions or services based on context.
ai-dev-2024-ml-workshop
The 'ai-dev-2024-ml-workshop' repository contains materials for the Deploy and Monitor ML Pipelines workshop at the AI_dev 2024 conference in Paris, focusing on deployment designs of machine learning pipelines using open-source applications and free-tier tools. It demonstrates automating data refresh and forecasting using GitHub Actions and Docker, monitoring with MLflow and YData Profiling, and setting up a monitoring dashboard with Quarto doc on GitHub Pages.
Trinity
Trinity is an Explainable AI (XAI) Analysis and Visualization tool designed for Deep Learning systems or other models performing complex classification or decoding. It provides performance analysis through interactive 3D projections that are hyper-dimensional aware, allowing users to explore hyperspace, hypersurface, projections, and manifolds. Trinity primarily works with JSON data formats and supports the visualization of FeatureVector objects. Users can analyze and visualize data points, correlate inputs with classification results, and create custom color maps for better data interpretation. Trinity has been successfully applied to various use cases including Deep Learning Object detection models, COVID gene/tissue classification, Brain Computer Interface decoders, and Large Language Model (ChatGPT) Embeddings Analysis.
DistillKit
DistillKit is an open-source research effort by Arcee.AI focusing on model distillation methods for Large Language Models (LLMs). It provides tools for improving model performance and efficiency through logit-based and hidden states-based distillation methods. The tool supports supervised fine-tuning and aims to enhance the adoption of open-source LLM distillation techniques.
terraform-provider-aiven
The Terraform provider for Aiven.io, an open source data platform as a service. See the official documentation to learn about all the possible services and resources.
AutoNode
AutoNode is a self-operating computer system designed to automate web interactions and data extraction processes. It leverages advanced technologies like OCR (Optical Character Recognition), YOLO (You Only Look Once) models for object detection, and a custom site-graph to navigate and interact with web pages programmatically. Users can define objectives, create site-graphs, and utilize AutoNode via API to automate tasks on websites. The tool also supports training custom YOLO models for object detection and OCR for text recognition on web pages. AutoNode can be used for tasks such as extracting product details, automating web interactions, and more.
talking-avatar-with-ai
The 'talking-avatar-with-ai' project is a digital human system that utilizes OpenAI's GPT-3 for generating responses, Whisper for audio transcription, Eleven Labs for voice generation, and Rhubarb Lip Sync for lip synchronization. The system allows users to interact with a digital avatar that responds with text, facial expressions, and animations, creating a realistic conversational experience. The project includes setup for environment variables, chat prompt templates, chat model configuration, and structured output parsing to enhance the interaction with the digital human.
zep
Zep is a long-term memory service for AI Assistant apps. With Zep, you can provide AI assistants with the ability to recall past conversations, no matter how distant, while also reducing hallucinations, latency, and cost. Zep persists and recalls chat histories, and automatically generates summaries and other artifacts from these chat histories. It also embeds messages and summaries, enabling you to search Zep for relevant context from past conversations. Zep does all of this asyncronously, ensuring these operations don't impact your user's chat experience. Data is persisted to database, allowing you to scale out when growth demands. Zep also provides a simple, easy to use abstraction for document vector search called Document Collections. This is designed to complement Zep's core memory features, but is not designed to be a general purpose vector database. Zep allows you to be more intentional about constructing your prompt: 1. automatically adding a few recent messages, with the number customized for your app; 2. a summary of recent conversations prior to the messages above; 3. and/or contextually relevant summaries or messages surfaced from the entire chat session. 4. and/or relevant Business data from Zep Document Collections.
empower-functions
Empower Functions is a family of large language models (LLMs) that provide GPT-4 level capabilities for real-world 'tool using' use cases. These models offer compatibility support to be used as drop-in replacements, enabling interactions with external APIs by recognizing when a function needs to be called and generating JSON containing necessary arguments based on user inputs. This capability is crucial for building conversational agents and applications that convert natural language into API calls, facilitating tasks such as weather inquiries, data extraction, and interactions with knowledge bases. The models can handle multi-turn conversations, choose between tools or standard dialogue, ask for clarification on missing parameters, integrate responses with tool outputs in a streaming fashion, and efficiently execute multiple functions either in parallel or sequentially with dependencies.
local-talking-llm
The 'local-talking-llm' repository provides a tutorial on building a voice assistant similar to Jarvis or Friday from Iron Man movies, capable of offline operation on a computer. The tutorial covers setting up a Python environment, installing necessary libraries like rich, openai-whisper, suno-bark, langchain, sounddevice, pyaudio, and speechrecognition. It utilizes Ollama for Large Language Model (LLM) serving and includes components for speech recognition, conversational chain, and speech synthesis. The implementation involves creating a TextToSpeechService class for Bark, defining functions for audio recording, transcription, LLM response generation, and audio playback. The main application loop guides users through interactive voice-based conversations with the assistant.
invariant
Invariant Analyzer is an open-source scanner designed for LLM-based AI agents to find bugs, vulnerabilities, and security threats. It scans agent execution traces to identify issues like looping behavior, data leaks, prompt injections, and unsafe code execution. The tool offers a library of built-in checkers, an expressive policy language, data flow analysis, real-time monitoring, and extensible architecture for custom checkers. It helps developers debug AI agents, scan for security violations, and prevent security issues and data breaches during runtime. The analyzer leverages deep contextual understanding and a purpose-built rule matching engine for security policy enforcement.
superpipe
Superpipe is a lightweight framework designed for building, evaluating, and optimizing data transformation and data extraction pipelines using LLMs. It allows users to easily combine their favorite LLM libraries with Superpipe's building blocks to create pipelines tailored to their unique data and use cases. The tool facilitates rapid prototyping, evaluation, and optimization of end-to-end pipelines for tasks such as classification and evaluation of job departments based on work history. Superpipe also provides functionalities for evaluating pipeline performance, optimizing parameters for cost, accuracy, and speed, and conducting grid searches to experiment with different models and prompts.
neo4j-graphrag-python
The Neo4j GraphRAG package for Python is an official repository that provides features for creating and managing vector indexes in Neo4j databases. It aims to offer developers a reliable package with long-term commitment, maintenance, and fast feature updates. The package supports various Python versions and includes functionalities for creating vector indexes, populating them, and performing similarity searches. It also provides guidelines for installation, examples, and development processes such as installing dependencies, making changes, and running tests.
For similar jobs
psychic
Psychic is a tool that provides a platform for users to access psychic readings and services. It offers a range of features such as tarot card readings, astrology consultations, and spiritual guidance. Users can connect with experienced psychics and receive personalized insights and advice on various aspects of their lives. The platform is designed to be user-friendly and intuitive, making it easy for users to navigate and explore the different services available. Whether you're looking for guidance on love, career, or personal growth, Psychic has you covered.
weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.
agentcloud
AgentCloud is an open-source platform that enables companies to build and deploy private LLM chat apps, empowering teams to securely interact with their data. It comprises three main components: Agent Backend, Webapp, and Vector Proxy. To run this project locally, clone the repository, install Docker, and start the services. The project is licensed under the GNU Affero General Public License, version 3 only. Contributions and feedback are welcome from the community.
oss-fuzz-gen
This framework generates fuzz targets for real-world `C`/`C++` projects with various Large Language Models (LLM) and benchmarks them via the `OSS-Fuzz` platform. It manages to successfully leverage LLMs to generate valid fuzz targets (which generate non-zero coverage increase) for 160 C/C++ projects. The maximum line coverage increase is 29% from the existing human-written targets.
LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.
VisionCraft
The VisionCraft API is a free API for using over 100 different AI models. From images to sound.
kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.