cf-proxy-ex
Cloudflare super proxy, serverless proxy, DuckDuckGo proxy (with AI chat, including GPT-4o/Claude 3), OpenAI/ChatGPT proxy, GitHub acceleration, online proxy. Set up a free serverless proxy using a Cloudflare Worker.
Stars: 234
Cloudflare Proxy EX provides a Cloudflare super proxy, an OpenAI/ChatGPT proxy, GitHub acceleration, and online proxy services. Users create a Worker on the Cloudflare site by copying in the contents of the worker.js file, then prepend their own domain name to any URL to use the tool. The project is an improvement on gaboolic's cloudflare-reverse-proxy, adding features such as removing the '/proxy/' prefix, handling redirects, modifying headers, and converting relative paths to absolute paths. It aims to enhance proxy functionality and address issues some websites face. However, users are advised not to log in to any website through the online proxy because of potential security risks.
README:
💻 Online Demo | ⭐ Usage | 🚀 Quick Start | 📈 Improvements over the original project | 🔎 Known Issues | 📸 Screenshots | 📦 LICENSE | 📄 Notes
Cloudflare super proxy, OpenAI/ChatGPT proxy, GitHub acceleration, online proxy.
https://y.demo.wvusd.homes/https://duckduckgo.com/?t=h_&q=hi&ia=chat
https://y.demo.wvusd.homes/https://www.google.com/maps
- Create a new Worker on the Cloudflare site and paste in the contents of the worker.js file.
- Prepend https://your-domain/ to any URL, e.g. https://your-domain/https://github.com
- This project is based on gaboolic's cloudflare-reverse-proxy.
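The usage above can be sketched as a minimal Worker. This is a simplified illustration of the idea, not the project's actual worker.js; the helper name `extractTargetUrl` is ours:

```javascript
// Simplified sketch of the core proxy idea: everything after the first "/"
// of the incoming request path is treated as the target URL and fetched.
// Illustrative only; the real worker.js does much more (see the
// improvements list below).
function extractTargetUrl(pathname, search = "") {
  let target = pathname.slice(1) + search; // "/https://github.com" -> "https://github.com"
  if (!/^https?:\/\//.test(target)) {
    target = "http://" + target; // auto-prepend a scheme, as the README describes
  }
  return target;
}

const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    const target = extractTargetUrl(url.pathname, url.search);
    // A real implementation also rewrites headers, cookies and the
    // returned HTML before passing the response back.
    return fetch(target, { method: request.method, headers: request.headers });
  },
};
// In a real Worker, this object would be the module's default export.
```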
- Log in at https://www.cloudflare.com/
- Create an application
- Create a Worker (Pages is a bit more work, since it requires a package.json file, but its advantage is that the assigned domain can be used directly)
- Click the "Deploy" button
- Edit the code
- Paste in the contents of the worker.js file and click "Save and Deploy"
- (Optional) Add a custom domain
- Free domain registration: https://secure.nom.za/ https://nic.eu.org/ https://nic.ua
- No application needed, .link domains free for one year: https://dynadot.com/
- Domain purchase: https://porkbun.com/ https://domain.com/
  When buying, you can press Ctrl + F and search for $0.
- Removed the /proxy/ prefix for easier use. (An issue had raised this, but the upstream author wanted to add a landing page instead; this project resolves it.)
- Redirects (3XX) are handled manually, so relative resources that would otherwise fail still load.
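The manual redirect handling could look like this sketch (a hypothetical helper, not the project's code; `proxyOrigin` stands in for your deployment's origin):

```javascript
// Rewrite a 3XX Location header so the browser follows the redirect
// *through* the proxy instead of escaping to the target site directly.
// Relative redirect targets ("/login") are resolved against the proxied
// URL first, which is what keeps relative resources loading.
function rewriteLocation(location, targetUrl, proxyOrigin) {
  const absolute = new URL(location, targetUrl).href;
  return proxyOrigin + "/" + absolute;
}
```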
- Check whether the URL to be proxied starts with http; if not, prepend it automatically.
- Replace every piece of proxy-related information in the headers with the target site's information, to keep some sites from blocking the proxy.
- Convert all relative paths to absolute paths so that resources (JS, CSS, etc.) load correctly.
- Scope cookies to the currently proxied site only, which prevents 400 Bad Request responses caused by oversized cookies and also stops malicious sites from harvesting all cookies.
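The cookie scoping could be sketched roughly like this (an illustrative helper, not the project's implementation; the exact Path scheme used here is an assumption):

```javascript
// Restrict each Set-Cookie from the target site to a path that begins with
// that site's URL, so a cookie set while proxying one site is not sent
// when a different site is proxied through the same Worker.
function scopeSetCookie(setCookie, targetOrigin) {
  // Drop the upstream Domain/Path attributes and pin Path to this target.
  const scoped = setCookie
    .split(";")
    .map((part) => part.trim())
    .filter((part) => !/^(domain|path)=/i.test(part));
  scoped.push("Path=/" + targetOrigin);
  return scoped.join("; ");
}
```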
- Inject XMLHttpRequest and fetch wrappers into the returned HTML, so form submissions also go through the proxy.
- Inject a document observer into the returned HTML, so newly added links are likewise rewritten from relative to absolute.
- Modify the Content-Security-Policy and X-Frame-Options headers, which makes DuckDuckGo proxyable and also fixes some sites that would not open at all.
- When the response is HTML, add "Content-Type": "text/html; charset=utf-8" to prevent mojibake on some older Chinese sites.
- Add a cookie recording the last visited URL, which fixes anomalies after using a search engine, e.g. https://the proxy/https://www.duckduckgo.com/ redirecting to https://the proxy/?q=key.
- Miscellaneous code optimizations.
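The relative-to-absolute rewriting described in the list above can be sketched with a crude regex stand-in (the real project rewrites far more thoroughly; the name `rewriteHtml` is ours):

```javascript
// Rewrite root-relative href/src attributes in the returned HTML into
// proxied absolute URLs, so resources like JS and CSS load through the
// proxy. A deliberately simple regex illustration, not a full rewriter.
function rewriteHtml(html, targetUrl, proxyOrigin) {
  return html.replace(/(href|src)="(\/[^"]*)"/g, (match, attr, path) => {
    const absolute = new URL(path, targetUrl).href; // resolve against the target
    return `${attr}="${proxyOrigin}/${absolute}"`;
  });
}
```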
The password protection uses a cookie: when a password is set, the Worker first checks whether the password cookie exists and is correct, and returns 403 immediately if not. The password cookie's default name is passwordCookieName. To set a password, search the code for const password = ""; and replace the empty string with your password.
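A sketch of that check (illustrative only; the cookie parsing here is ours, and only the default cookie name passwordCookieName comes from the project):

```javascript
// Gate requests on a password cookie: if a password is configured and the
// "passwordCookieName" cookie is missing or wrong, the caller should
// respond with 403 before any proxying happens.
function isAuthorized(cookieHeader, password) {
  if (!password) return true; // no password configured: open access
  const cookies = Object.fromEntries(
    (cookieHeader || "")
      .split(";")
      .map((part) => part.trim().split("="))
      .filter((kv) => kv.length === 2) // ignore malformed pairs
  );
  return cookies["passwordCookieName"] === password;
}
```

In the Worker, a failed check would return something like new Response("Forbidden", { status: 403 }) before the target is fetched.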
- If the original page also overrides XMLHttpRequest and fetch (as Reddit does), some requests may misbehave.
MIT License + a few conditions
I actually hesitated for a long time over whether to open-source this, because previous open-source projects of mine were taken, used to scam people, and sold for money. But I also did not want people to keep reinventing the wheel, so I decided to add two conditions:
- Any proxy site built with this project must credit this open-source link.
- Using this project for profit is prohibited, including projects based on it.
Without rules, nothing takes shape. A nation without law cannot be governed; a people without law cannot stand. When everyone observes law and discipline and acts according to them, society is stable and the economy develops. Without the restraint of discipline and the control of law, order cannot be guaranteed, the environment in which people live and develop is damaged, and people cannot live and work in peace. In our country, the strategy of governing according to law is being vigorously implemented, and building a harmonious socialist society has taken root in people's hearts. Every citizen should understand the importance and necessity of abiding by law and discipline, learn the rules, know the law, and obey it, and so help advance the rule of law and social harmony. "Take pride in obeying law and discipline; take shame in breaking them" describes a way of conduct and a principle of life; it awakens conscience, displays the power of self-discipline, and advocates a socialist view of law and morality. History and reality have repeatedly shown that the root of law-abiding behavior is moral conscience. One should respect oneself and others, at the very least "not overstep the bounds," revere the law, and act within what law and discipline allow. Law and discipline exist to protect the common interests of all the people; they are solemn and must not be violated, so one should consciously restrain one's own conduct by them, refusing to commit violations or "cross the line," and thereby preserve one's integrity and build a good life.
As the saying goes, "the net of the law is vast; though its meshes are wide, nothing slips through." Those who despise and trample on law and discipline will inevitably be punished by them. Yet wherever there is society, violations are unavoidable, and some will disturb public order and stability. Every citizen with moral conscience and a sense of law should therefore take up law and discipline as a weapon and resolutely resist every kind of violation. Only through common effort can we create a society in which the law-abiding feel honored and at ease and receive respect and support, while violations are condemned and resisted and offenders are brought to justice. A society that is orderly everywhere and in which everyone upholds the rule of law is one worth aspiring to, full of vitality and hope.
- Please do not log in to any website through the online proxy. Although this project restricts the cookie scope, so in theory it should be safe, it is still strongly discouraged. The original project this is based on used globally scoped cookies: if you logged in to GitHub through that proxy and then visited a malicious site, all of your cookies could be stolen.
- Having realized the drawbacks of an online proxy, the author has decided to blaze a new trail, explore new blue oceans, continuously forge new growth drivers and advantages, actively pursue the conversion of old growth drivers into new ones, and achieve a dimensionality-reduction strike through horizontal integration of the industry chain... in other words, to write a client-mode cf-proxy, roughly along the same lines as Tor. It is under active development and currently going well.
Similar Open Source Tools
poco-agent
Poco Agent is a cloud-based tool that provides a secure sandbox environment for running tasks without affecting the host machine. It offers a modern UI with mobile adaptability, easy configuration through Docker, and extensive capabilities with support for MCP protocol and custom skills. Users can run tasks asynchronously and schedule them, even when the web interface is closed. Additional features include a built-in browser for internet research and GitHub repository integration. Poco Agent aims to be a more secure, visually appealing, and user-friendly alternative to OpenClaw.
gitmesh
GitMesh is an AI-powered Git collaboration network designed to address contributor dropout in open source projects. It offers real-time branch-level insights, intelligent contributor-task matching, and automated workflows. The platform transforms complex codebases into clear contribution journeys, fostering engagement through gamified rewards and integration with open source support programs. GitMesh's mascot, Meshy/Mesh Wolf, symbolizes agility, resilience, and teamwork, reflecting the platform's ethos of efficiency and power through collaboration.
celeste-python
Celeste AI is a type-safe, modality/provider-agnostic tool that offers unified interface for various providers like OpenAI, Anthropic, Gemini, Mistral, and more. It supports multiple modalities including text, image, audio, video, and embeddings, with full Pydantic validation and IDE autocomplete. Users can switch providers instantly, ensuring zero lock-in and a lightweight architecture. The tool provides primitives, not frameworks, for clean I/O operations.
FeedCraft
FeedCraft is a powerful tool to process your rss feeds as a middleware. Use it to translate your feed, extract fulltext, emulate browser to render js-heavy page, use llm such as google gemini to generate brief for your rss article, use natural language to filter your rss feed, and more! It is an open-source tool that can be self-deployed and used with any RSS reader. It supports AI-powered processing using Open AI compatible LLMs, custom prompt, saving rules to apply to different RSS sources, portable mode for on-the-go usage, and dock mode for advanced customization of RSS sources and processing parameters.
MirrorFlow
MirrorFlow is an end-to-end toolchain for dialogue data processing, cleaning/extraction, trainable samples generation, fine-tuning/distillation, and usage with evaluation. It supports two main routes: 'Digital Self' for fine-tuning chat records to mimic user expression habits and 'GPT-4o Style Alignment' for aligning output structures, clarification methods, refusal habits, and tool invocation behavior.
llm_model_hub
Model Hub V2 is a one-stop platform for model fine-tuning, deployment, and debugging without code, providing users with a visual interface to quickly validate the effects of fine-tuning various open-source models, facilitating rapid experimentation and decision-making, and lowering the threshold for users to fine-tune large models. For detailed instructions, please refer to the Feishu documentation.
aiotieba
Aiotieba is an asynchronous Python library for interacting with the Tieba API. It provides a comprehensive set of features for working with Tieba, including support for authentication, thread and post management, and image and file uploading. Aiotieba is well-documented and easy to use, making it a great choice for developers who want to build applications that interact with Tieba.
Jarvis
Jarvis is a powerful virtual AI assistant designed to simplify daily tasks through voice command integration. It features automation, device management, and personalized interactions, transforming technology engagement. Built using Python and AI models, it serves personal and administrative needs efficiently, making processes seamless and productive.
ChatPilot
ChatPilot is a chat agent tool that enables AgentChat conversations, supports Google search, URL conversation (RAG), and code interpreter functionality, replicates Kimi Chat (file, drag and drop; URL, send out), and supports OpenAI/Azure API. It is based on LangChain and implements ReAct and OpenAI Function Call for agent Q&A dialogue. The tool supports various automatic tools such as online search using Google Search API, URL parsing tool, Python code interpreter, and enhanced RAG file Q&A with query rewriting support. It also allows front-end and back-end service separation using Svelte and FastAPI, respectively. Additionally, it supports voice input/output, image generation, user management, permission control, and chat record import/export.
genkit-plugins
Community plugins repository for Google Firebase Genkit, containing various plugins for AI APIs and Vector Stores. Developed by The Fire Company, this repository offers plugins like genkitx-anthropic, genkitx-cohere, genkitx-groq, genkitx-mistral, genkitx-openai, genkitx-convex, and genkitx-hnsw. Users can easily install and use these plugins in their projects, with examples provided in the documentation. The repository also showcases products like Fireview and Giftit built using these plugins, and welcomes contributions from the community.
GraphGen
GraphGen is a framework for synthetic data generation guided by knowledge graphs. It enhances supervised fine-tuning for large language models (LLMs) by generating synthetic data based on a fine-grained knowledge graph. The tool identifies knowledge gaps in LLMs, prioritizes generating QA pairs targeting high-value knowledge, incorporates multi-hop neighborhood sampling, and employs style-controlled generation to diversify QA data. Users can use LLaMA-Factory and xtuner for fine-tuning LLMs after data generation.
himarket
HiMarket is an out-of-the-box AI open platform solution that can be used to build enterprise-level AI capability markets and developer ecosystem centers. It consists of three core components tailored to different roles within the enterprise: 1. AI open platform management backend (for administrators/operators), for easy packaging of diverse AI capabilities such as model services, MCP Server, Agent, etc., into standardized "AI products" in API form, with comprehensive documentation and examples, for one-click publishing to the portal. 2. AI open platform portal (for developers/internal users), a "storefront" where developers complete registration, create consumers, obtain credentials, browse and subscribe to AI products, test online, and clearly monitor their own call status and costs. 3. AI Gateway: as a subproject of the Higress community, the Higress AI Gateway carries out all AI call authentication, security, flow control, protocol conversion, and observability capabilities.
chatgpt-infinity
ChatGPT Infinity is a free and powerful add-on that makes ChatGPT generate infinite answers on any topic. It offers customizable topic selection, multilingual support, adjustable response interval, and auto-scroll feature for a seamless chat experience.
bitcart
Bitcart is a platform designed for merchants, users, and developers, providing easy setup and usage. It includes various linked repositories for core daemons, admin panel, ready store, Docker packaging, Python library for coins connection, BitCCL scripting language, documentation, and official site. The platform aims to simplify the process for merchants and developers to interact and transact with cryptocurrencies, offering a comprehensive ecosystem for managing transactions and payments.
L3AGI
L3AGI is an open-source tool that enables AI Assistants to collaborate together as effectively as human teams. It provides a robust set of functionalities that empower users to design, supervise, and execute both autonomous AI Assistants and Teams of Assistants. Key features include the ability to create and manage Teams of AI Assistants, design and oversee standalone AI Assistants, equip AI Assistants with the ability to retain and recall information, connect AI Assistants to an array of data sources for efficient information retrieval and processing, and employ curated sets of tools for specific tasks. L3AGI also offers a user-friendly interface, APIs for integration with other systems, and a vibrant community for support and collaboration.
For similar tasks
uccl
UCCL is a command-line utility tool designed to simplify the process of converting Unix-style file paths to Windows-style file paths and vice versa. It provides a convenient way for developers and system administrators to handle file path conversions without the need for manual adjustments. With UCCL, users can easily convert file paths between different operating systems, making it a valuable tool for cross-platform development and file management tasks.
For similar jobs
AirGo
AirGo is a front and rear end separation, multi user, multi protocol proxy service management system, simple and easy to use. It supports vless, vmess, shadowsocks, and hysteria2.
mosec
Mosec is a high-performance and flexible model serving framework for building ML model-enabled backends and microservices. It bridges the gap between any machine learning model you just trained and an efficient online service API.
- **Highly performant**: web layer and task coordination built with Rust 🦀, which offers blazing speed in addition to efficient CPU utilization powered by async I/O
- **Ease of use**: user interface purely in Python 🐍, by which users can serve their models in an ML framework-agnostic manner using the same code as they do for offline testing
- **Dynamic batching**: aggregate requests from different users for batched inference and distribute results back
- **Pipelined stages**: spawn multiple processes for pipelined stages to handle CPU/GPU/IO mixed workloads
- **Cloud friendly**: designed to run in the cloud, with model warmup, graceful shutdown, and Prometheus monitoring metrics, easily managed by Kubernetes or any container orchestration system
- **Do one thing well**: focus on the online serving part; users can pay attention to model optimization and business logic
llm-code-interpreter
The 'llm-code-interpreter' repository is a deprecated plugin that provides a code interpreter on steroids for ChatGPT by E2B. It gives ChatGPT access to a sandboxed cloud environment with capabilities like running any code, accessing Linux OS, installing programs, using filesystem, running processes, and accessing the internet. The plugin exposes commands to run shell commands, read files, and write files, enabling various possibilities such as running different languages, installing programs, starting servers, deploying websites, and more. It is powered by the E2B API and is designed for agents to freely experiment within a sandboxed environment.
pezzo
Pezzo is a fully cloud-native and open-source LLMOps platform that allows users to observe and monitor AI operations, troubleshoot issues, save costs and latency, collaborate, manage prompts, and deliver AI changes instantly. It supports various clients for prompt management, observability, and caching. Users can run the full Pezzo stack locally using Docker Compose, with prerequisites including Node.js 18+, Docker, and a GraphQL Language Feature Support VSCode Extension. Contributions are welcome, and the source code is available under the Apache 2.0 License.
learn-generative-ai
Learn Cloud Applied Generative AI Engineering (GenEng) is a course focusing on the application of generative AI technologies in various industries. The course covers topics such as the economic impact of generative AI, the role of developers in adopting and integrating generative AI technologies, and the future trends in generative AI. Students will learn about tools like OpenAI API, LangChain, and Pinecone, and how to build and deploy Large Language Models (LLMs) for different applications. The course also explores the convergence of generative AI with Web 3.0 and its potential implications for decentralized intelligence.
gcloud-aio
This repository contains shared codebase for two projects: gcloud-aio and gcloud-rest. gcloud-aio is built for Python 3's asyncio, while gcloud-rest is a threadsafe requests-based implementation. It provides clients for Google Cloud services like Auth, BigQuery, Datastore, KMS, PubSub, Storage, and Task Queue. Users can install the library using pip and refer to the documentation for usage details. Developers can contribute to the project by following the contribution guide.
fluid
Fluid is an open source Kubernetes-native Distributed Dataset Orchestrator and Accelerator for data-intensive applications, such as big data and AI applications. It implements dataset abstraction, scalable cache runtime, automated data operations, elasticity and scheduling, and is runtime platform agnostic. Key concepts include Dataset and Runtime. Prerequisites include Kubernetes version > 1.16, Golang 1.18+, and Helm 3. The tool offers features like accelerating remote file accessing, machine learning, accelerating PVC, preloading dataset, and on-the-fly dataset cache scaling. Contributions are welcomed, and the project is under the Apache 2.0 license with a vendor-neutral approach.
aiges
AIGES is a core component of the Athena Serving Framework, designed as a universal encapsulation tool for AI developers to deploy AI algorithm models and engines quickly. By integrating AIGES, you can deploy AI algorithm models and engines rapidly and host them on the Athena Serving Framework, utilizing supporting auxiliary systems for networking, distribution strategies, data processing, etc. The Athena Serving Framework aims to accelerate the cloud service of AI algorithm models and engines, providing multiple guarantees for cloud service stability through cloud-native architecture. You can efficiently and securely deploy, upgrade, scale, operate, and monitor models and engines without focusing on underlying infrastructure and service-related development, governance, and operations.



