Webscout
Search for anything using Google, DuckDuckGo, and phind.com; access AI models; transcribe YouTube videos; generate temporary emails and phone numbers; use TTS; run WebAI (terminal GPT and open interpreter); and work with offline LLMs
Stars: 106
WebScout is a versatile tool that allows users to search for anything using Google, DuckDuckGo, and phind.com. It provides access to AI models, YouTube transcription and downloading, temporary email and phone number generation, text-to-speech, WebAI (terminal GPT and open interpreter), offline LLMs, weather forecasting, and advanced web search.
README:
Search for anything using Google, DuckDuckGo, and Phind.com; access AI models; transcribe YouTube videos; generate temporary emails and phone numbers; utilize text-to-speech; leverage WebAI (terminal GPT and open interpreter); explore offline LLMs; and much more!
- Comprehensive Search: Leverage Google, DuckDuckGo, and Phind.com for diverse search results.
- AI Powerhouse: Access and interact with various AI models, including OpenAI, Cohere, and more.
- YouTube Toolkit: Transcribe YouTube videos effortlessly and download audio/video content.
- Tempmail & Temp Number: Generate temporary email addresses and phone numbers for enhanced privacy.
- Text-to-Speech (TTS): Convert text into natural-sounding speech using various TTS providers.
- WebAI: Experience the power of terminal-based GPT and an open interpreter for code execution and more.
- Offline LLMs: Utilize powerful language models offline with GGUF support.
- Extensive Provider Ecosystem: Explore a vast collection of providers, including BasedGPT, DeepSeek, and many others.
- Local LLM Execution: Run GGUF models locally with minimal configuration.
- Rawdog Scripting: Execute Python scripts directly within your terminal using the rawdog feature.
- GGUF Conversion & Quantization: Convert and quantize Hugging Face models to GGUF format.
- Autollama: Download Hugging Face models and automatically convert them for Ollama compatibility.
- Function Calling (Beta): Experiment with function calling capabilities for enhanced AI interactions.
pip install -U webscout
python -m webscout --help
| Command | Description |
|---|---|
| `python -m webscout answers -k Text` | CLI function to perform an answers search using Webscout. |
| `python -m webscout images -k Text` | CLI function to perform an images search using Webscout. |
| `python -m webscout maps -k Text` | CLI function to perform a maps search using Webscout. |
| `python -m webscout news -k Text` | CLI function to perform a news search using Webscout. |
| `python -m webscout suggestions -k Text` | CLI function to perform a suggestions search using Webscout. |
| `python -m webscout text -k Text` | CLI function to perform a text search using Webscout. |
| `python -m webscout translate -k Text` | CLI function to perform a translation using Webscout. |
| `python -m webscout version` | Prints and returns the version of the program. |
| `python -m webscout videos -k Text` | CLI function to perform a videos search using the DuckDuckGo API. |
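For example, to run a quick text search from the command line (any query string can follow `-k`):

python -m webscout text -k "open source LLMs"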
xa-ar for Arabia
xa-en for Arabia (en)
ar-es for Argentina
au-en for Australia
at-de for Austria
be-fr for Belgium (fr)
be-nl for Belgium (nl)
br-pt for Brazil
bg-bg for Bulgaria
ca-en for Canada
ca-fr for Canada (fr)
ct-ca for Catalan
cl-es for Chile
cn-zh for China
co-es for Colombia
hr-hr for Croatia
cz-cs for Czech Republic
dk-da for Denmark
ee-et for Estonia
fi-fi for Finland
fr-fr for France
de-de for Germany
gr-el for Greece
hk-tzh for Hong Kong
hu-hu for Hungary
in-en for India
id-id for Indonesia
id-en for Indonesia (en)
ie-en for Ireland
il-he for Israel
it-it for Italy
jp-jp for Japan
kr-kr for Korea
lv-lv for Latvia
lt-lt for Lithuania
xl-es for Latin America
my-ms for Malaysia
my-en for Malaysia (en)
mx-es for Mexico
nl-nl for Netherlands
nz-en for New Zealand
no-no for Norway
pe-es for Peru
ph-en for Philippines
ph-tl for Philippines (tl)
pl-pl for Poland
pt-pt for Portugal
ro-ro for Romania
ru-ru for Russia
sg-en for Singapore
sk-sk for Slovak Republic
sl-sl for Slovenia
za-en for South Africa
es-es for Spain
se-sv for Sweden
ch-de for Switzerland (de)
ch-fr for Switzerland (fr)
ch-it for Switzerland (it)
tw-tzh for Taiwan
th-th for Thailand
tr-tr for Turkey
ua-uk for Ukraine
uk-en for United Kingdom
us-en for United States
ue-es for United States (es)
ve-es for Venezuela
vn-vi for Vietnam
wt-wt for No region
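These region codes are passed through the `region` argument of the search methods. A minimal sketch, mirroring the text-search example later in this README (the query and region here are arbitrary):

```python
from webscout import WEBS

# 'in-en' is the India (en) code from the list above
with WEBS() as webs:
    for r in webs.text("python programming", region="in-en", max_results=5):
        print(r)
```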
```python
from os import rename, getcwd

from webscout import YTdownloader

def download_audio(video_id):
    youtube_link = video_id
    handler = YTdownloader.Handler(query=youtube_link)
    for third_query_data in handler.run(format='mp3', quality='128kbps', limit=1):
        audio_path = handler.save(third_query_data, dir=getcwd())
        rename(audio_path, "audio.mp3")

def download_video(video_id):
    youtube_link = video_id
    handler = YTdownloader.Handler(query=youtube_link)
    for third_query_data in handler.run(format='mp4', quality='auto', limit=1):
        video_path = handler.save(third_query_data, dir=getcwd())
        rename(video_path, "video.mp4")

if __name__ == "__main__":
    # download_audio("https://www.youtube.com/watch?v=c0tMvzB0OKw")
    download_video("https://www.youtube.com/watch?v=c0tMvzB0OKw")
```
```python
from webscout import weather as w

weather = w.get("Qazigund")
w.print_weather(weather)
```

```python
from webscout import weather_ascii as w

weather = w.get("Qazigund")
print(weather)
```
```python
from rich.console import Console
from webscout import tempid

def main():
    console = Console()
    phone = tempid.TemporaryPhoneNumber()
    try:
        # Get a temporary phone number for a specific country (or random)
        number = phone.get_number(country="Finland")
        console.print(f"Your temporary phone number: [bold cyan]{number}[/bold cyan]")

        # Pause briefly while messages arrive (replace with your actual logic)
        import time
        time.sleep(30)  # Adjust the waiting time as needed

        # Retrieve and print messages
        messages = phone.get_messages(number)
        if messages:
            # Access individual messages using indexing
            console.print(f"[bold green]{messages[0].frm}:[/] {messages[0].content}")
            # (Add more lines if you expect multiple messages)
        else:
            console.print("No messages received.")
    except Exception as e:
        console.print(f"[bold red]An error occurred: {e}")

if __name__ == "__main__":
    main()
```
```python
import asyncio

from rich.console import Console
from rich.table import Table
from rich.text import Text
from webscout import tempid

async def main() -> None:
    console = Console()
    client = tempid.Client()
    try:
        domains = await client.get_domains()
        if not domains:
            console.print("[bold red]No domains available. Please try again later.")
            return

        email = await client.create_email(domain=domains[0].name)
        console.print(f"Your temporary email: [bold cyan]{email.email}[/bold cyan]")
        console.print(f"Token for accessing the email: [bold cyan]{email.token}[/bold cyan]")

        # Poll until the message list is available
        while True:
            messages = await client.get_messages(email.email)
            if messages is not None:
                break

        if messages:
            table = Table(show_header=True, header_style="bold magenta")
            table.add_column("From", style="bold cyan")
            table.add_column("Subject", style="bold yellow")
            table.add_column("Body", style="bold green")
            for message in messages:
                body_preview = Text(message.body_text if message.body_text else "No body")
                table.add_row(message.email_from or "Unknown", message.subject or "No Subject", body_preview)
            console.print(table)
        else:
            console.print("No messages found.")
    except Exception as e:
        console.print(f"[bold red]An error occurred: {e}")
    finally:
        await client.close()

if __name__ == '__main__':
    asyncio.run(main())
```
The `transcriber` function in Webscout is a handy tool that transcribes YouTube videos.
Example:
```python
from rich import print
from webscout import YTTranscriber

yt = YTTranscriber()
video_url = input("Enter the YouTube video URL: ")
transcript = yt.get_transcript(video_url, languages=None)
print(transcript)
```
```python
from rich import print
from webscout import GoogleS

searcher = GoogleS()
results = searcher.search("HelpingAI-9B", max_results=20, extract_text=False, max_text_length=200)
for result in results:
    print(result)
```
```python
from rich import print
from webscout import BingS

searcher = BingS()
results = searcher.search("HelpingAI-9B", max_results=20, extract_webpage_text=True, max_extract_characters=1000)
for result in results:
    print(result)
```
The `WEBS` and `AsyncWEBS` classes are used to retrieve search results from DuckDuckGo.com. To use the `AsyncWEBS` class, you can perform asynchronous operations using Python's `asyncio` library. When initializing an instance of `WEBS` or `AsyncWEBS`, you can provide optional arguments such as `proxies` (shown in the asynchronous example below).
Example - WEBS:
```python
from webscout import WEBS

R = WEBS().text("python programming", max_results=5)
print(R)
```
Example - AsyncWEBS:
```python
import asyncio
import logging
import sys
from itertools import chain
from random import shuffle

import requests
from webscout import AsyncWEBS

# If you have proxies, define them here
proxies = None

if sys.platform.lower().startswith("win"):
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

def get_words():
    word_site = "https://www.mit.edu/~ecprice/wordlist.10000"
    resp = requests.get(word_site)
    words = resp.text.splitlines()
    return words

async def aget_results(word):
    async with AsyncWEBS(proxies=proxies) as WEBS:
        results = await WEBS.text(word, max_results=None)
        return results

async def main():
    words = get_words()
    shuffle(words)
    tasks = [aget_results(word) for word in words[:10]]
    results = await asyncio.gather(*tasks)
    print("Done")
    for r in chain.from_iterable(results):
        print(r)

logging.basicConfig(level=logging.DEBUG)
asyncio.run(main())
```
Important Note: The `WEBS` and `AsyncWEBS` classes should always be used as context managers (with statement). This ensures proper resource management and cleanup, as the context manager automatically handles opening and closing the HTTP client connection.
Exceptions:

- `WebscoutE`: Raised when there is a generic exception during the API request.
```python
from webscout import WEBS

# Text search for 'live free or die' using DuckDuckGo.com
with WEBS() as webs:
    for r in webs.text('live free or die', region='wt-wt', safesearch='off', timelimit='y', max_results=10):
        print(r)
```
```python
from webscout import WEBS

# Instant answers for the query "sun" using DuckDuckGo.com
with WEBS() as webs:
    for r in webs.answers("sun"):
        print(r)
```
```python
from webscout import WEBS

# Image search for the keyword 'butterfly' using DuckDuckGo.com
with WEBS() as webs:
    keywords = 'butterfly'
    WEBS_images_gen = webs.images(
        keywords,
        region="wt-wt",
        safesearch="off",
        size=None,
        type_image=None,
        layout=None,
        license_image=None,
        max_results=10,
    )
    for r in WEBS_images_gen:
        print(r)
```
```python
from webscout import WEBS

# Video search for the keyword 'tesla' using DuckDuckGo.com
with WEBS() as webs:
    keywords = 'tesla'
    WEBS_videos_gen = webs.videos(
        keywords,
        region="wt-wt",
        safesearch="off",
        timelimit="w",
        resolution="high",
        duration="medium",
        max_results=10,
    )
    for r in WEBS_videos_gen:
        print(r)
```
```python
import datetime

from webscout import WEBS

def fetch_news(keywords, timelimit):
    news_list = []
    with WEBS() as webs_instance:
        WEBS_news_gen = webs_instance.news(
            keywords,
            region="wt-wt",
            safesearch="off",
            timelimit=timelimit,
            max_results=20
        )
        for r in WEBS_news_gen:
            # Convert the date to a human-readable format using datetime
            r['date'] = datetime.datetime.fromisoformat(r['date']).strftime('%B %d, %Y')
            news_list.append(r)
    return news_list

def _format_headlines(news_list, max_headlines: int = 100):
    headlines = []
    for idx, news_item in enumerate(news_list):
        if idx >= max_headlines:
            break
        new_headline = f"{idx + 1}. {news_item['title'].strip()} "
        new_headline += f"(URL: {news_item['url'].strip()}) "
        new_headline += f"{news_item['body'].strip()}"
        new_headline += "\n"
        headlines.append(new_headline)
    return "\n".join(headlines)

# Example usage
keywords = 'latest AI news'
timelimit = 'd'
news_list = fetch_news(keywords, timelimit)

# Format and print the headlines
formatted_headlines = _format_headlines(news_list)
print(formatted_headlines)
```
```python
from webscout import WEBS

# Map search for the keyword 'school' in 'anantnag' using DuckDuckGo.com
with WEBS() as webs:
    for r in webs.maps("school", place="anantnag", max_results=50):
        print(r)
```
```python
from webscout import WEBS

# Translation of the keyword 'school' to Hindi ('hi') using DuckDuckGo.com
with WEBS() as webs:
    keywords = 'school'
    r = webs.translate(keywords, to="hi")
    print(r)
```
```python
from webscout import WEBS

# Suggestions for the keyword 'fly' using DuckDuckGo.com
with WEBS() as webs:
    for r in webs.suggestions("fly"):
        print(r)
```
```python
from webscout import WEBSX

s = "Python development tools"
result = WEBSX(s)
print(result)
```
- Free-mode
- Linux Terminal
- English Translator and Improver
- `position` Interviewer
- JavaScript Console
- Excel Sheet
- English Pronunciation Helper
- Spoken English Teacher and Improver
- Travel Guide
- Plagiarism Checker
- Character from Movie/Book/Anything
- Advertiser
- Storyteller
- Football Commentator
- Stand-up Comedian
- Motivational Coach
- Composer
- Debater
- Debate Coach
- Screenwriter
- Novelist
- Movie Critic
- Relationship Coach
- Poet
- Rapper
- Motivational Speaker
- Philosophy Teacher
- Philosopher
- Math Teacher
- AI Writing Tutor
- UX/UI Developer
- Cyber Security Specialist
- Recruiter
- Life Coach
- Etymologist
- Commentariat
- Magician
- Career Counselor
- Pet Behaviorist
- Personal Trainer
- Mental Health Adviser
- Real Estate Agent
- Logistician
- Dentist
- Web Design Consultant
- AI Assisted Doctor
- Doctor
- Accountant
- Chef
- Automobile Mechanic
- Artist Advisor
- Financial Analyst
- Investment Manager
- Tea-Taster
- Interior Decorator
- Florist
- Self-Help Book
- Gnomist
- Aphorism Book
- Text Based Adventure Game
- AI Trying to Escape the Box
- Fancy Title Generator
- Statistician
- Prompt Generator
- Instructor in a School
- SQL terminal
- Dietitian
- Psychologist
- Smart Domain Name Generator
- Tech Reviewer
- Developer Relations consultant
- Academician
- IT Architect
- Lunatic
- Gaslighter
- Fallacy Finder
- Journal Reviewer
- DIY Expert
- Social Media Influencer
- Socrat
- Socratic Method
- Educational Content Creator
- Yogi
- Essay Writer
- Social Media Manager
- Elocutionist
- Scientific Data Visualizer
- Car Navigation System
- Hypnotherapist
- Historian
- Astrologer
- Film Critic
- Classical Music Composer
- Journalist
- Digital Art Gallery Guide
- Public Speaking Coach
- Makeup Artist
- Babysitter
- Tech Writer
- Ascii Artist
- Python interpreter
- Synonym finder
- Personal Shopper
- Food Critic
- Virtual Doctor
- Personal Chef
- Legal Advisor
- Personal Stylist
- Machine Learning Engineer
- Biblical Translator
- SVG designer
- IT Expert
- Chess Player
- Midjourney Prompt Generator
- Fullstack Software Developer
- Mathematician
- Regex Generator
- Time Travel Guide
- Dream Interpreter
- Talent Coach
- R programming Interpreter
- StackOverflow Post
- Emoji Translator
- PHP Interpreter
- Emergency Response Professional
- Fill in the Blank Worksheets Generator
- Software Quality Assurance Tester
- Tic-Tac-Toe Game
- Password Generator
- New Language Creator
- Web Browser
- Senior Frontend Developer
- Solr Search Engine
- Startup Idea Generator
- Spongebob's Magic Conch Shell
- Language Detector
- Salesperson
- Commit Message Generator
- Chief Executive Officer
- Diagram Generator
- Speech-Language Pathologist (SLP)
- Startup Tech Lawyer
- Title Generator for written pieces
- Product Manager
- Drunk Person
- Mathematical History Teacher
- Song Recommender
- Cover Letter
- Technology Transferer
- Unconstrained AI model DAN
- Gomoku player
- Proofreader
- Buddha
- Muslim imam
- Chemical reactor
- Friend
- Python Interpreter
- ChatGPT prompt generator
- Wikipedia page
- Japanese Kanji quiz machine
- note-taking assistant
- `language` Literary Critic
- Cheap Travel Ticket Advisor
- DALL-E
- MathBot
- DAN-1
- DAN
- STAN
- DUDE
- Mongo Tom
- LAD
- EvilBot
- NeoGPT
- Astute
- AIM
- CAN
- FunnyGPT
- CreativeGPT
- BetterDAN
- GPT-4
- Wheatley
- Evil Confidant
- DAN 8.6
- Hypothetical response
- BH
- Text Continuation
- Dude v3
- SDA (Superior DAN)
- AntiGPT
- BasedGPT v2
- DevMode + Ranti
- KEVIN
- GPT-4 Simulator
- UCAR
- Dan 8.6
- 3-Liner
- M78
- Maximum
- BasedGPT
- Confronting personalities
- Ron
- UnGPT
- BasedBOB
- AntiGPT v2
- Oppo
- FR3D
- NRAF
- NECO
- MAN
- Eva
- Meanie
- Dev Mode v2
- Evil Chad 2.1
- Universal Jailbreak
- PersonGPT
- BISH
- DAN 11.0
- Aligned
- VIOLET
- TranslatorBot
- JailBreak
- Moralizing Rant
- Mr. Blonde
- New DAN
- GPT-4REAL
- DeltaGPT
- SWITCH
- Jedi Mind Trick
- DAN 9.0
- Dev Mode (Compact)
- OMEGA
- Coach Bobby Knight
- LiveGPT
- DAN Jailbreak
- Cooper
- Steve
- DAN 5.0
- Axies
- OMNI
- Burple
- JOHN
- An Ethereum Developer
- SEO Prompt
- Prompt Enhancer
- Data Scientist
- League of Legends Player
Note: Some "acts" use placeholders like position
or language
which should be replaced with a specific value when using the prompt.
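As a loose illustration, several provider constructors later in this README expose an `act` parameter (see the BLACKBOXAI and DeepSeek examples); a hedged sketch, assuming `act` accepts one of the act names above (the exact accepted values are an assumption here):

```python
from webscout import BLACKBOXAI

# Assumption: `act` takes an act name from the list above
ai = BLACKBOXAI(is_conversation=True, act="Linux Terminal")
print(ai.chat("pwd"))
```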
🖼️ Text to Images - DeepInfraImager, PollinationsAI, BlackboxAIImager, AiForceimagger, NexraImager, HFimager, ArtbitImager
Every TTI provider shares the same usage code; you just need to change the import.
```python
from webscout import DeepInfraImager

bot = DeepInfraImager()
resp = bot.generate("AI-generated image - webscout", 1)
print(bot.save(resp))
```
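Per the note above, switching providers only requires changing the import; a minimal sketch, assuming `PollinationsAI` (from the provider list) exposes the same `generate`/`save` interface:

```python
from webscout import PollinationsAI  # any provider from the list above

bot = PollinationsAI()
resp = bot.generate("AI-generated image - webscout", 1)  # same call shape as DeepInfraImager
print(bot.save(resp))
```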
```python
from webscout import Voicepods

voicepods = Voicepods()
text = "Hello, this is a test of the Voicepods text-to-speech"

print("Generating audio...")
audio_file = voicepods.tts(text)

print("Playing audio...")
voicepods.play_audio(audio_file)
```
```python
from webscout import WEBS as w

R = w().chat("Who are you", model='gpt-4o-mini')  # GPT-3.5 Turbo, mixtral-8x7b, llama-3-70b, claude-3-haiku, gpt-4o-mini
print(R)
```
```python
from webscout import PhindSearch

# Create an instance of the PHIND class
ph = PhindSearch()

# Define a prompt to send to the AI
prompt = "write an essay on phind"

# Use the 'ask' method to send the prompt and receive a response
response = ph.ask(prompt)

# Extract and print the message from the response
message = ph.get_message(response)
print(message)
```
Using Phindv2:

```python
from webscout import Phindv2

# Create an instance of the Phindv2 class
ph = Phindv2()

# Define a prompt to send to the AI
prompt = ""

# Use the 'ask' method to send the prompt and receive a response
response = ph.ask(prompt)

# Extract and print the message from the response
message = ph.get_message(response)
print(message)
```
```python
from rich import print
from webscout import GEMINI

COOKIE_FILE = "cookies.json"

# Optional: Provide proxy details if needed
PROXIES = {}

# Initialize GEMINI with cookie file and optional proxies
gemini = GEMINI(cookie_file=COOKIE_FILE, proxy=PROXIES)

# Ask a question and print the response
response = gemini.chat("websearch about HelpingAI and who is its developer")
print(response)
```
```python
from webscout import YEPCHAT

ai = YEPCHAT(Tools=False)
response = ai.chat(input(">>> "))
for chunk in response:
    print(chunk, end="", flush=True)

# --------------- Tool Call ---------------
from rich import print
from webscout import YEPCHAT

def get_current_time():
    import datetime
    return f"The current time is {datetime.datetime.now().strftime('%H:%M:%S')}"

def get_weather(location: str) -> str:
    return f"The weather in {location} is sunny."

ai = YEPCHAT(Tools=True)  # Set Tools=True to use tools in the chat.

ai.tool_registry.register_tool("get_current_time", get_current_time, "Gets the current time.")
ai.tool_registry.register_tool(
    "get_weather",
    get_weather,
    "Gets the weather for a given location.",
    parameters={
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "The city and state, or zip code"}
        },
        "required": ["location"],
    },
)

response = ai.chat(input(">>> "))
for chunk in response:
    print(chunk, end="", flush=True)
```
```python
from rich import print
from webscout import BLACKBOXAI

ai = BLACKBOXAI(
    is_conversation=True,
    max_tokens=800,
    timeout=30,
    intro=None,
    filepath=None,
    update_file=True,
    proxies={},
    history_offset=10250,
    act=None,
    model=None  # You can specify a model if needed
)

# Start an infinite loop for continuous interaction
while True:
    # Define a prompt to send to the AI
    prompt = input("Enter your prompt: ")

    # Check if the user wants to exit the loop
    if prompt.lower() == "exit":
        break

    # Use the 'chat' method to send the prompt and receive a response
    r = ai.chat(prompt)
    print(r)
```
```python
from rich import print
from webscout import Perplexity

perplexity = Perplexity()

# Stream the response
response = perplexity.chat(input(">>> "))
for chunk in response:
    print(chunk, end="", flush=True)
perplexity.close()
```
```python
from rich import print
from webscout import Meta

# **For unauthenticated usage**
meta_ai = Meta()

# Simple text prompt
response = meta_ai.chat("What is the capital of France?")
print(response)

# Streaming response
for chunk in meta_ai.chat("Tell me a story about a cat."):
    print(chunk, end="", flush=True)

# **For authenticated usage (including image generation)**
fb_email = "[email protected]"
fb_password = "qwertfdsa"
meta_ai = Meta(fb_email=fb_email, fb_password=fb_password)

# Text prompt with web search
response = meta_ai.ask("what is currently happening in bangladesh in aug 2024")
print(response["message"])  # Access the text message
print("Sources:", response["sources"])  # Access sources (if any)

# Image generation
response = meta_ai.ask("Create an image of a cat wearing a hat.")
print(response["message"])  # Print the text message from the response
for media in response["media"]:
    print(media["url"])  # Access image URLs
```
```python
from webscout import KOBOLDAI

# Instantiate the KOBOLDAI class with default parameters
koboldai = KOBOLDAI()

# Define a prompt to send to the AI
prompt = "What is the capital of France?"

# Use the 'ask' method to get a response from the AI
response = koboldai.ask(prompt)

# Extract and print the message from the response
message = koboldai.get_message(response)
print(message)
```
```python
from webscout import REKA

a = REKA(is_conversation=True, max_tokens=8000, timeout=30, api_key="")

prompt = "tell me about india"
response_str = a.chat(prompt)
print(response_str)
```
```python
from webscout import Cohere

a = Cohere(is_conversation=True, max_tokens=8000, timeout=30, api_key="")

prompt = "tell me about india"
response_str = a.chat(prompt)
print(response_str)
```
```python
from webscout import BasedGPT

# Initialize the BasedGPT provider
basedgpt = BasedGPT(
    is_conversation=True,  # Chat conversationally
    max_tokens=600,        # Maximum tokens to generate
    timeout=30,            # HTTP request timeout
    intro="You are a helpful and friendly AI.",  # Introductory prompt
    filepath="chat_history.txt",  # File to store conversation history
    update_file=True,      # Update the chat history file
)

# Send a prompt to the AI
prompt = "What is the meaning of life?"
response = basedgpt.chat(prompt)

# Print the AI's response
print(response)
```
```python
from rich import print
from webscout import DeepSeek

ai = DeepSeek(
    is_conversation=True,
    api_key='cookie',
    max_tokens=800,
    timeout=30,
    intro=None,
    filepath=None,
    update_file=True,
    proxies={},
    history_offset=10250,
    act=None,
    model="deepseek_chat"
)

# Define a prompt to send to the AI
prompt = "Tell me about india"

# Use the 'chat' method to send the prompt and receive a response
r = ai.chat(prompt)
print(r)
```
```python
from webscout import DeepInfra

ai = DeepInfra(
    is_conversation=True,
    model="Qwen/Qwen2-72B-Instruct",
    max_tokens=800,
    timeout=30,
    intro=None,
    filepath=None,
    update_file=True,
    proxies={},
    history_offset=10250,
    act=None,
)

prompt = "what is meaning of life"
response = ai.ask(prompt)

# Extract and print the message from the response
message = ai.get_message(response)
print(message)
```
```python
from webscout import GROQ

ai = GROQ(api_key="")
response = ai.chat("What is the meaning of life?")
print(response)

# ---------------------- TOOL CALL ----------------------
import json

from webscout import GROQ  # Adjust import based on your project structure
from webscout import WEBS

# Initialize the GROQ client
client = GROQ(api_key="")
MODEL = 'llama3-groq-70b-8192-tool-use-preview'

# Function to evaluate a mathematical expression
def calculate(expression):
    """Evaluate a mathematical expression"""
    try:
        result = eval(expression)
        return json.dumps({"result": result})
    except Exception as e:
        return json.dumps({"error": str(e)})

# Function to perform a text search using DuckDuckGo.com
def search(query):
    """Perform a text search using DuckDuckGo.com"""
    try:
        results = WEBS().text(query, max_results=5)
        return json.dumps({"results": results})
    except Exception as e:
        return json.dumps({"error": str(e)})

# Add the functions to the provider
client.add_function("calculate", calculate)
client.add_function("search", search)

# Define the tools
tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a mathematical expression",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "The mathematical expression to evaluate",
                    }
                },
                "required": ["expression"],
            },
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search",
            "description": "Perform a text search using DuckDuckGo.com and Yep.com",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query to execute",
                    }
                },
                "required": ["query"],
            },
        }
    }
]

user_prompt_calculate = "What is 25 * 4 + 10?"
response_calculate = client.chat(user_prompt_calculate, tools=tools)
print(response_calculate)

user_prompt_search = "Find information on HelpingAI and who is its developer"
response_search = client.chat(user_prompt_search, tools=tools)
print(response_search)
```
```python
from webscout import LLAMA

llama = LLAMA()
r = llama.chat("What is the meaning of life?")
print(r)
```
```python
from webscout import AndiSearch

a = AndiSearch()
print(a.chat("HelpingAI-9B"))
```
```python
import json
import logging

from rich import print
from webscout import LLAMA3, WEBS
from webscout.Agents.functioncall import FunctionCallingAgent

# Define tools that the agent can use
tools = [
    {
        "type": "function",
        "function": {
            "name": "UserDetail",
            "parameters": {
                "type": "object",
                "title": "UserDetail",
                "properties": {
                    "name": {"title": "Name", "type": "string"},
                    "age": {"title": "Age", "type": "integer"}
                },
                "required": ["name", "age"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "web_search",
            "description": "Search query on google",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "web search query"}
                },
                "required": ["query"]
            }
        }
    },
    {  # General AI tool
        "type": "function",
        "function": {
            "name": "general_ai",
            "description": "Use general AI knowledge to answer the question",
            "parameters": {
                "type": "object",
                "properties": {
                    "question": {"type": "string", "description": "The question to answer"}
                },
                "required": ["question"]
            }
        }
    }
]

# Initialize the FunctionCallingAgent with the specified tools
agent = FunctionCallingAgent(tools=tools)
llama = LLAMA3()

# Input message from the user
user = input(">>> ")
message = user
function_call_data = agent.function_call_handler(message)
print(f"Function Call Data: {function_call_data}")

# Check for errors in the function call data
if "error" not in function_call_data:
    function_name = function_call_data.get("tool_name")  # Use 'tool_name' instead of 'name'
    if function_name == "web_search":
        arguments = function_call_data.get("tool_input", {})  # Get tool input arguments
        query = arguments.get("query")
        if query:
            with WEBS() as webs:
                search_results = webs.text(query, max_results=5)
            prompt = (
                f"Based on the following search results:\n\n{search_results}\n\n"
                f"Question: {user}\n\n"
                "Please provide a comprehensive answer to the question based on the search results above. "
                "Include relevant webpage URLs in your answer when appropriate. "
                "If the search results don't contain relevant information, please state that and provide the best answer you can based on your general knowledge."
            )
            response = llama.chat(prompt)
            for c in response:
                print(c, end="", flush=True)
        else:
            print("Please provide a search query.")
    elif function_name == "general_ai":  # Handle general AI tool
        arguments = function_call_data.get("tool_input", {})
        question = arguments.get("question")
        if question:
            response = llama.chat(question)  # Use the LLM directly
            for c in response:
                print(c, end="", flush=True)
        else:
            print("Please provide a question.")
    else:
        result = agent.execute_function(function_call_data)
        print(f"Function Execution Result: {result}")
else:
    print(f"Error: {function_call_data['error']}")
```
Other available providers include: LLAMA3, pizzagpt, RUBIKSAI, Koala, Darkai, AI4Chat, Farfalle, PIAI, Felo, XDASH, Julius, YouChat, YEPCHAT, Cloudflare, TurboSeek, Editee, AI21, Chatify, Cerebras, X0GPT, Lepton, GEMINIAPI, Cleeai, Elmo, Genspark, Upstage, Free2GPT, Bing, DiscordRocks, GPTWeb, AIGameIO, LlamaTutor, PromptRefine, AIUncensored, TutorAI, Bixin, ChatGPTES, Bagoodex, ChatHub, AmigoChat. Usage code is similar to the providers shown above.
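A hedged sketch for these providers, assuming they follow the same `chat` interface as the examples above (`Felo` is just one name picked from the list):

```python
from webscout import Felo  # swap in any provider from the list above

ai = Felo()
response = ai.chat("What is the meaning of life?")
print(response)
```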
```python
from webscout.LLM import LLM

# Read the system message from the file
with open('system.txt', 'r') as file:
    system_message = file.read()

# Initialize the LLM class with the model name and system message
llm = LLM(model="microsoft/WizardLM-2-8x22B", system_message=system_message)

while True:
    # Get the user input
    user_input = input("User: ")

    # Define the messages to be sent
    messages = [
        {"role": "user", "content": user_input}
    ]

    # Use the chat method to get the response
    response = llm.chat(messages)

    # Print the response
    print("AI: ", response)
```
Webscout can now run GGUF models locally. You can download and run your favorite models with minimal configuration.
Example:
```python
from webscout.Local.utils import download_model
from webscout.Local.model import Model
from webscout.Local.thread import Thread
from webscout.Local import formats

# 1. Download the model
repo_id = "microsoft/Phi-3-mini-4k-instruct-gguf"  # Replace with the desired Hugging Face repo
filename = "Phi-3-mini-4k-instruct-q4.gguf"  # Replace with the correct filename
model_path = download_model(repo_id, filename, token="")

# 2. Load the model
model = Model(model_path, n_gpu_layers=4)

# 3. Create a Thread for conversation
thread = Thread(model, formats.phi3)

# 4. Start interacting with the model
thread.interact()
```
Webscout's local raw-dog feature allows you to run Python scripts within your terminal prompt.
Example:
````python
import webscout.Local as ws
from webscout.Local.rawdog import RawDog
from webscout.Local.samplers import DefaultSampling
from webscout.Local.formats import chatml, AdvancedFormat
from webscout.Local.utils import download_model

repo_id = "YorkieOH10/granite-8b-code-instruct-Q8_0-GGUF"
filename = "granite-8b-code-instruct.Q8_0.gguf"
model_path = download_model(repo_id, filename, token='')

# Load the model using the downloaded path
model = ws.Model(model_path, n_gpu_layers=10)

rawdog = RawDog()

# Create an AdvancedFormat and override the system content.
# A lambda is used so the prompt can be generated dynamically.
chat_format = AdvancedFormat(chatml)

system_content = """
You are a command-line coding assistant called Rawdog that generates and auto-executes Python scripts.

A typical interaction goes like this:
1. The user gives you a natural language PROMPT.
2. You:
    i. Determine what needs to be done
    ii. Write a short Python SCRIPT to do it
    iii. Communicate back to the user by printing to the console in that SCRIPT
3. The compiler extracts the script and then runs it using exec(). If an exception is raised,
   it will be sent back to you starting with "PREVIOUS SCRIPT EXCEPTION:".
4. In case of an exception, regenerate an error-free script.

If you need to review script outputs before completing the task, you can print the word "CONTINUE" at the end of your SCRIPT.
This can be useful for summarizing documents or technical readouts, reading instructions before
deciding what to do, or other tasks that require multi-step reasoning.
A typical 'CONTINUE' interaction looks like this:
1. The user gives you a natural language PROMPT.
2. You:
    i. Determine what needs to be done
    ii. Determine that you need to see the output of some subprocess call to complete the task
    iii. Write a short Python SCRIPT to print that and then print the word "CONTINUE"
3. The compiler
    i. Checks and runs your SCRIPT
    ii. Captures the output and appends it to the conversation as "LAST SCRIPT OUTPUT:"
    iii. Finds the word "CONTINUE" and sends control back to you
4. You again:
    i. Look at the original PROMPT + the "LAST SCRIPT OUTPUT:" to determine what needs to be done
    ii. Write a short Python SCRIPT to do it
    iii. Communicate back to the user by printing to the console in that SCRIPT
5. The compiler...

Please follow these conventions carefully:
- Decline any tasks that seem dangerous, irreversible, or that you don't understand.
- Always review the full conversation prior to answering and maintain continuity.
- If asked for information, just print the information clearly and concisely.
- If asked to do something, print a concise summary of what you've done as confirmation.
- If asked a question, respond in a friendly, conversational way. Use programmatically-generated and natural language responses as appropriate.
- If you need clarification, return a SCRIPT that prints your question. In the next interaction, continue based on the user's response.
- Assume the user would like something concise. For example rather than printing a massive table, filter or summarize it to what's likely of interest.
- Actively clean up any temporary processes or files you use.
- When looking through files, use git as available to skip files, and skip hidden files (.env, .git, etc) by default.
- You can plot anything with matplotlib.
- ALWAYS Return your SCRIPT inside of a single pair of ``` delimiters. Only the console output of the first such SCRIPT is visible to the user, so make sure that it's complete and don't bother returning anything else.
"""

chat_format.override('system_content', lambda: system_content)

thread = ws.Thread(model, format=chat_format, sampler=DefaultSampling)

while True:
    prompt = input(">: ")
    if prompt.lower() == "q":
        break

    response = thread.send(prompt)

    # Process the response using RawDog
    script_output = rawdog.main(response)
    if script_output:
        print(script_output)
````
Webscout provides tools to convert and quantize Hugging Face models into the GGUF format for use with offline LLMs.
Example:
```python
from webscout import gguf

"""
Valid quantization methods:
"q2_k", "q3_k_l", "q3_k_m", "q3_k_s",
"q4_0", "q4_1", "q4_k_m", "q4_k_s",
"q5_0", "q5_1", "q5_k_m", "q5_k_s",
"q6_k", "q8_0"
"""

gguf.convert(
    model_id="OEvortex/HelpingAI-Lite-1.5T",  # Replace with your model ID
    username="Abhaykoul",  # Replace with your Hugging Face username
    token="hf_token_write",  # Replace with your Hugging Face token
    quantization_methods="q4_k_m"  # Optional, adjust quantization methods
)
```
Webscout's `autollama` utility downloads a model from Hugging Face and then automatically makes it Ollama-ready.
```python
from webscout import autollama

model_path = "Vortex4ai/Jarvis-0.5B"
gguf_file = "test2-q4_k_m.gguf"

autollama.main(model_path, gguf_file)
```
Command Line Usage:
- GGUF Conversion: `python -m webscout.Extra.gguf -m "OEvortex/HelpingAI-Lite-1.5T" -u "your_username" -t "your_hf_token" -q "q4_k_m,q5_k_m"`
- Autollama: `python -m webscout.Extra.autollama -m "OEvortex/HelpingAI-Lite-1.5T" -g "HelpingAI-Lite-1.5T.q4_k_m.gguf"`
Note:
- Replace `"your_username"` and `"your_hf_token"` with your actual Hugging Face credentials.
- The `model_path` in `autollama` is the Hugging Face model ID, and `gguf_file` is the GGUF file ID.
python -m webscout.webai webai --provider "phind" --rawdog
Contributions are welcome! If you'd like to contribute to Webscout, please follow these steps:
- Fork the repository.
- Create a new branch for your feature or bug fix.
- Make your changes and commit them with descriptive messages.
- Push your branch to your forked repository.
- Submit a pull request to the main repository.
This project is licensed under the MIT License - see the LICENSE file for details.
- All the amazing developers who have contributed to the project!
- The open-source community for their support and inspiration.