
knowledge
(Chinese Only) Everything I know: DevOps & CloudNative, Linux, Embedded, Homelab, Music, Blockchain, AI, etc...
Stars: 479

This repository serves as a personal knowledge base for the owner's reference and use. It covers a wide range of topics including cloud-native operations, the Kubernetes ecosystem, networking, cloud services, telemetry, CI/CD, electronic engineering, hardware projects, operating systems, homelab setups, high-performance computing applications, OpenWrt router usage, programming languages, music theory, blockchain, distributed systems principles, and various other knowledge domains. For ease of maintenance, content is periodically refined and republished on the owner's blog.
README:
A personal knowledge base, mainly for my own reference. It is not a tutorial, and correctness is not guaranteed!
For ease of maintenance, content in this repository may be organized and polished once it matures, then published to my blog https://thiscute.world/, with only a link kept here.
Main contents (roughly ordered by how much material each topic has):
- Cloud-native operations
  - Kubernetes ecosystem: deployment, configuration, components, and usage notes
  - Networking: Kubernetes cluster networking, Linux networking (still learning)
  - Cloud services: usage notes for AWS/GCP/Alibaba Cloud/...
  - Telemetry: monitoring (Prometheus + Grafana), logging (ELK/Loki), distributed tracing
  - CI/CD: Jenkins, GitLab CI, Argo Workflows, etc.
- Electronic engineering (a strong recent interest; currently climbing the tech tree)
  - Playing with various chips and boards: Raspberry Pi, RK3588, STM32, ESP32
  - Fun projects: drones, smart cars, robotic arms, even robots
- Operating systems: Linux, NixOS, KVM virtualization, etc.
- Homelab: notes on how I run my homelab
  - Hardware configuration, network topology, purchase dates, channels, and prices
  - Running a PVE (Proxmox VE) cluster
  - What all that compute is used for: a K3s cluster, distributed monitoring, Home Assistant, NAS, trying out new cloud-native projects...
- OpenWrt router usage
- Programming language study notes: Go/Python/C/Rust/...
- Music: music theory, harmonica/bamboo flute, singing-voice synthesis, arranging (Reaper)
- Blockchain, distributed systems, and their principles
- Machine learning / deep learning (apparently not started yet...)
- Various other topics I have dabbled in
The folder structure is the document index, so no separate index is kept here -_-
For well-known reasons, we often need to configure mirror sources for various systems, applications, and package managers to speed up downloads.
The main mirror sites are:
- Alibaba Cloud open-source mirror site: in my experience the fastest mirror in mainland China.
  - Provides mirrors for ubuntu/debian/centos/alpine as well as pypi/goproxy and other mainstream OS and language sources. Fairly complete.
- Tsinghua (TUNA) open-source mirror: very complete and updated promptly.
  - Not as fast as Alibaba Cloud, though, and occasionally goes down for maintenance.
- Beijing Foreign Studies University (BFSU) mirror site: a sister site of the Tsinghua mirror; since few people use it so far, it feels much faster than the Tsinghua source.
- USTC open-source mirror: also very complete and updated quickly, but not stable enough.
  - Slightly faster than the Tsinghua source, but goes down for maintenance more often; a while ago it even took its pypi mirror offline due to funding issues.
- Tencent mirror: launched only recently; I have not used it yet.
My top recommendation is the BFSU mirror site: stable, reliable, and fast. (A rough latency comparison is sketched below.)
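To compare these mirrors from your own network, a quick latency probe is usually enough before committing to one. Below is a minimal sketch in TypeScript (assuming Node 18+ with global fetch); the URLs are the commonly used public endpoints of the sites above, and a single HEAD request is only a rough proxy for real download speed, so verify before relying on it.

```typescript
// Rough latency check for the mirror sites listed above.
// Assumptions: Node 18+ (global fetch/performance); the URLs below are the
// commonly used public endpoints of each mirror -- verify them yourself.
const mirrors: Record<string, string> = {
  aliyun: "https://mirrors.aliyun.com/",
  tuna: "https://mirrors.tuna.tsinghua.edu.cn/",
  bfsu: "https://mirrors.bfsu.edu.cn/",
  ustc: "https://mirrors.ustc.edu.cn/",
  tencent: "https://mirrors.cloud.tencent.com/",
};

// Time a single HEAD request: a crude but quick signal of which mirror is closest.
async function probe(url: string): Promise<number> {
  const start = performance.now();
  await fetch(url, { method: "HEAD" });
  return performance.now() - start;
}

async function main(): Promise<void> {
  for (const [name, url] of Object.entries(mirrors)) {
    try {
      const ms = await probe(url);
      console.log(`${name.padEnd(8)} ${ms.toFixed(0)} ms`);
    } catch {
      console.log(`${name.padEnd(8)} unreachable`);
    }
  }
}

main();
```

Run it a few times: transient network conditions matter more than any single sample.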
For the DevOps/SRE field, you can basically refer directly to the CNCF blueprint: CNCF Cloud Native Interactive Landscape
A complete CS self-study guide (collecting the best courses from around the world):
Low-level-oriented personal blogs (CSAPP notes):
- 不周山作品集: acquiring knowledge is like Mount Buzhou (不周山), it will never be "complete" (周全); hence, live and learn.
Systematic SRE/DevOps documentation:
SRE/DevOps article collections:
Distributed system design:
- https://github.com/binhnguyennus/awesome-scalability
- https://github.com/Vonng/ddia
- https://github.com/donnemartin/system-design-primer
Company/team blogs, with content across all areas:
- Geek Time's 10x 程序员工作法 (The 10x Programmer's Way of Working)
  - Learn how to collaborate efficiently as a team and improve productivity (work less overtime while delivering higher-quality results).
- Domain-Driven Design
  - Method: Event Storming
- The Mythical Man-Month: the way of software project management
- The Pragmatic Programmer
- Peopleware
- Crucial Conversations (《关键对话》): mastering how to communicate
  - Always take care to preserve the other person's sense of safety, and keep the purpose of the conversation firmly in mind.
- Refactoring
- MacTalk (池建强's musings): founder of Geek Time, 45+
- 李凡希's Blog:
- paste-markdown: a small official GitHub tool; copy a sheet/table in and it is automatically converted to markdown
- domchristie/turndown: converts an entire HTML page to markdown, though table support seems a bit problematic (see the sketch after this list)
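For reference, here is a minimal sketch of how turndown is typically used; TurndownService and .turndown() are the library's documented API, while the sample HTML and options are illustrative only. Table conversion generally needs the separate turndown-plugin-gfm plugin, which lines up with the note above about table support.

```typescript
// Minimal turndown usage sketch: convert an HTML fragment to Markdown.
// The sample HTML and the atx heading option are illustrative assumptions.
import TurndownService from "turndown";

const turndownService = new TurndownService({ headingStyle: "atx" });

const markdown = turndownService.turndown(
  "<h1>Notes</h1><p>Converted from <strong>HTML</strong> to Markdown.</p>"
);

console.log(markdown);
// Expected output:
// # Notes
//
// Converted from **HTML** to Markdown.
```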
Ryan Yin's Knowledge © by Ryan Yin is licensed under CC BY-SA 4.0
Similar Open Source Tools


aiwechat-vercel
aiwechat-vercel is a tool that integrates AI capabilities into WeChat public accounts using Vercel functions. It requires minimal server setup, low entry barriers, and only needs a domain name that can be bound to Vercel, with almost zero cost. The tool supports various AI models, continuous Q&A sessions, chat functionality, system prompts, and custom commands. It aims to provide a platform for learning and experimentation with AI integration in WeChat public accounts.

dify-plus
Dify-Plus is a project that extends and adds management center functionality to the original Dify project. It includes features such as user quota management, key quota settings, web page login authentication, and more. The project aims to address pain points in enterprise scenarios and is open for collaboration and discussion with the community.

MahjongCopilot
Mahjong Copilot is an AI assistant for the game Mahjong, based on the mjai (Mortal model) bot implementation. It provides step-by-step guidance for each move in the game, and can also be used to automatically play and join games. Mahjong Copilot supports both 3-person and 4-person Mahjong games, and is available in multiple languages.

chocolate-factory
Chocolate Factory is an open-source LLM application development framework designed to help you easily create powerful software development SDLC + LLM assistants. It provides a set of modules for integration into JVM projects and offers RAGScript for querying and local deployment examples. The tool follows a domain-driven problem-solving approach with key concepts like ProblemClarifier, ProblemAnalyzer, SolutionDesigner, SolutionReviewer, and SolutionExecutor. It supports use cases in desktop/IDE, server, and Android development, with a focus on AI-powered coding assistance and semantic search capabilities.

rtp-llm
rtp-llm is a Large Language Model (LLM) inference acceleration engine developed by Alibaba's Foundation Model Inference Team. It is widely used within Alibaba Group, supporting LLM services across multiple business units including Taobao, Tmall, Idlefish, Cainiao, Amap, Ele.me, AE, and Lazada. The rtp-llm project is a sub-project of havenask.

big-AGI
big-AGI is an AI suite designed for professionals seeking function, form, simplicity, and speed. It offers best-in-class Chats, Beams, and Calls with AI personas, visualizations, coding, drawing, side-by-side chatting, and more, all wrapped in a polished UX. The tool is powered by the latest models from 12 vendors and open-source servers, providing users with advanced AI capabilities and a seamless user experience. With continuous updates and enhancements, big-AGI aims to stay ahead of the curve in the AI landscape, catering to the needs of both developers and AI enthusiasts.

IntelliQ
IntelliQ is an open-source project aimed at providing a multi-turn question-answering system based on a large language model (LLM). The system combines advanced intent recognition and slot filling technology to enhance the depth of understanding and accuracy of responses in conversation systems. It offers a flexible and efficient solution for developers to build and optimize various conversational applications. The system features multi-turn dialogue management, intent recognition, slot filling, interface slot technology for real-time data retrieval and processing, adaptive learning for improving response accuracy and speed, and easy integration with detailed API documentation supporting multiple programming languages and platforms.

choco-builder
ChocoBuilder (aka Chocolate Factory) is an open-source LLM application development framework designed to help you easily create powerful software development SDLC + LLM generation assistants. It provides modules for integration into JVM projects, usage with RAGScript, and local deployment examples. ChocoBuilder follows a Domain Driven Problem-Solving design philosophy with key concepts like ProblemClarifier, ProblemAnalyzer, SolutionDesigner, SolutionReviewer, and SolutionExecutor. It offers use cases for desktop/IDE, server, and Android applications, with examples for frontend design, semantic code search, testcase generation, and code interpretation.

RSS-Translator
RSS-Translator is an open-source, simple, and self-deployable tool that allows users to translate titles or content, display in bilingual, subscribe to translated RSS/JSON feeds, support multiple translation engines, control update frequency of translation sources, view translation status, cache all translated content to reduce translation costs, view token/character usage for each source, provide AI content summarization, and retrieve full text. It currently supports various translation engines such as Free Translators, DeepL, OpenAI, ClaudeAI, Azure OpenAI, Google Gemini, Google Translate, Microsoft Translate API, Caiyun API, Moonshot AI, Together AI, OpenRouter AI, Groq, Doubao, OpenL, and Kagi API, with more being added continuously.

pyspur
PySpur is a graph-based editor designed for LLM (Large Language Models) workflows. It offers modular building blocks, node-level debugging, and performance evaluation. The tool is easy to hack, supports JSON configs for workflow graphs, and is lightweight with minimal dependencies. Users can quickly set up PySpur by cloning the repository, creating a .env file, starting docker services, and accessing the portal. PySpur can also work with local models served using Ollama, with steps provided for configuration. The roadmap includes features like canvas, async/batch execution, support for Ollama, new nodes, pipeline optimization, templates, code compilation, multimodal support, and more.

biniou
biniou is a self-hosted webui for various GenAI (generative artificial intelligence) tasks. It allows users to generate multimedia content using AI models and chatbots on their own computer, even without a dedicated GPU. The tool can work offline once deployed and required models are downloaded. It offers a wide range of features for text, image, audio, video, and 3D object generation and modification. Users can easily manage the tool through a control panel within the webui, with support for various operating systems and CUDA optimization. biniou is powered by Huggingface and Gradio, providing a cross-platform solution for AI content generation.

KB-Builder
KB Builder is an open-source knowledge base generation system based on the LLM large language model. It utilizes the RAG (Retrieval-Augmented Generation) data generation enhancement method to provide users with the ability to enhance knowledge generation and quickly build knowledge bases based on RAG. It aims to be the central hub for knowledge construction in enterprises, offering platform-based intelligent dialogue services and document knowledge base management functionality. Users can upload docx, pdf, txt, and md format documents and generate high-quality knowledge base question-answer pairs by invoking large models through the 'Parse Document' feature.

stock-trading
StockTrading AI is a small model stock automatic trading system that integrates with securities platforms, implements automated stock trading, utilizes QuartZ for scheduled tasks to update data daily, employs DL4J framework for LSTM model guidance on stock buying with T+1 short-term trading strategy, utilizes K8S+GithubAction for DevOps, and supports distributed offline training. Future optimizations include obtaining more historical stock data for incremental model training and tuning model hyperparameters to improve price trend prediction accuracy. The system provides various page displays for profit data statistics, trade order queries, stock price viewing, model prediction performance, scheduled task scheduling, and real-time log tracking.

coze-js
Coze-js is a monorepo containing packages for Coze API and Realtime API. It provides usage examples for Node.js and React Web, as well as full console and sample call up demos. The tool requires Node.js 18+, pnpm 9.12.0, and Rush 5.140.0 for installation. Developers can start developing projects within the repository by following the provided steps. Each package in the monorepo can be developed and published independently, with documentation on contributing guidelines and publishing. The tool is licensed under MIT.

LocalAI
LocalAI is a free and open-source OpenAI alternative that acts as a drop-in replacement REST API compatible with OpenAI (Elevenlabs, Anthropic, etc.) API specifications for local AI inferencing. It allows users to run LLMs, generate images, audio, and more locally or on-premises with consumer-grade hardware, supporting multiple model families and not requiring a GPU. LocalAI offers features such as text generation with GPTs, text-to-audio, audio-to-text transcription, image generation with stable diffusion, OpenAI functions, embeddings generation for vector databases, constrained grammars, downloading models directly from Huggingface, and a Vision API. It provides a detailed step-by-step introduction in its Getting Started guide and supports community integrations such as custom containers, WebUIs, model galleries, and various bots for Discord, Slack, and Telegram. LocalAI also offers resources like an LLM fine-tuning guide, instructions for local building and Kubernetes installation, projects integrating LocalAI, and a how-tos section curated by the community. It encourages users to cite the repository when utilizing it in downstream projects and acknowledges the contributions of various software from the community.
For similar tasks

aiac
AIAC is a library and command line tool to generate Infrastructure as Code (IaC) templates, configurations, utilities, queries, and more via LLM providers such as OpenAI, Amazon Bedrock, and Ollama. Users can define multiple 'backends' targeting different LLM providers and environments using a simple configuration file. The tool allows users to ask a model to generate templates for different scenarios and composes an appropriate request to the selected provider, storing the resulting code to a file and/or printing it to standard output.


askui
AskUI is a reliable, automated end-to-end automation tool that only depends on what is shown on your screen instead of the technology or platform you are running on.
For similar jobs

AirGo
AirGo is a front and rear end separation, multi user, multi protocol proxy service management system, simple and easy to use. It supports vless, vmess, shadowsocks, and hysteria2.

mosec
Mosec is a high-performance and flexible model serving framework for building ML model-enabled backends and microservices. It bridges the gap between any machine learning models you just trained and the efficient online service API.
- Highly performant: web layer and task coordination built with Rust 🦀, which offers blazing speed in addition to efficient CPU utilization powered by async I/O
- Ease of use: user interface purely in Python 🐍, by which users can serve their models in an ML framework-agnostic manner using the same code as they do for offline testing
- Dynamic batching: aggregate requests from different users for batched inference and distribute results back
- Pipelined stages: spawn multiple processes for pipelined stages to handle CPU/GPU/IO mixed workloads
- Cloud friendly: designed to run in the cloud, with model warmup, graceful shutdown, and Prometheus monitoring metrics, easily managed by Kubernetes or any container orchestration system
- Do one thing well: focus on the online serving part; users can pay attention to model optimization and business logic

llm-code-interpreter
The 'llm-code-interpreter' repository is a deprecated plugin that provides a code interpreter on steroids for ChatGPT by E2B. It gives ChatGPT access to a sandboxed cloud environment with capabilities like running any code, accessing Linux OS, installing programs, using filesystem, running processes, and accessing the internet. The plugin exposes commands to run shell commands, read files, and write files, enabling various possibilities such as running different languages, installing programs, starting servers, deploying websites, and more. It is powered by the E2B API and is designed for agents to freely experiment within a sandboxed environment.

pezzo
Pezzo is a fully cloud-native and open-source LLMOps platform that allows users to observe and monitor AI operations, troubleshoot issues, save costs and latency, collaborate, manage prompts, and deliver AI changes instantly. It supports various clients for prompt management, observability, and caching. Users can run the full Pezzo stack locally using Docker Compose, with prerequisites including Node.js 18+, Docker, and a GraphQL Language Feature Support VSCode Extension. Contributions are welcome, and the source code is available under the Apache 2.0 License.

learn-generative-ai
Learn Cloud Applied Generative AI Engineering (GenEng) is a course focusing on the application of generative AI technologies in various industries. The course covers topics such as the economic impact of generative AI, the role of developers in adopting and integrating generative AI technologies, and the future trends in generative AI. Students will learn about tools like OpenAI API, LangChain, and Pinecone, and how to build and deploy Large Language Models (LLMs) for different applications. The course also explores the convergence of generative AI with Web 3.0 and its potential implications for decentralized intelligence.

gcloud-aio
This repository contains shared codebase for two projects: gcloud-aio and gcloud-rest. gcloud-aio is built for Python 3's asyncio, while gcloud-rest is a threadsafe requests-based implementation. It provides clients for Google Cloud services like Auth, BigQuery, Datastore, KMS, PubSub, Storage, and Task Queue. Users can install the library using pip and refer to the documentation for usage details. Developers can contribute to the project by following the contribution guide.

fluid
Fluid is an open source Kubernetes-native Distributed Dataset Orchestrator and Accelerator for data-intensive applications, such as big data and AI applications. It implements dataset abstraction, scalable cache runtime, automated data operations, elasticity and scheduling, and is runtime platform agnostic. Key concepts include Dataset and Runtime. Prerequisites include Kubernetes version > 1.16, Golang 1.18+, and Helm 3. The tool offers features like accelerating remote file accessing, machine learning, accelerating PVC, preloading dataset, and on-the-fly dataset cache scaling. Contributions are welcomed, and the project is under the Apache 2.0 license with a vendor-neutral approach.

aiges
AIGES is a core component of the Athena Serving Framework, designed as a universal encapsulation tool for AI developers to deploy AI algorithm models and engines quickly. By integrating AIGES, you can deploy AI algorithm models and engines rapidly and host them on the Athena Serving Framework, utilizing supporting auxiliary systems for networking, distribution strategies, data processing, etc. The Athena Serving Framework aims to accelerate the cloud service of AI algorithm models and engines, providing multiple guarantees for cloud service stability through cloud-native architecture. You can efficiently and securely deploy, upgrade, scale, operate, and monitor models and engines without focusing on underlying infrastructure and service-related development, governance, and operations.