AirBnB_clone_v2
Stars: 98
The AirBnB Clone - The Console project is the first segment of the AirBnB project at Holberton School, aiming to cover fundamental concepts of higher level programming. The goal is to deploy a server as a simple copy of the AirBnB Website (HBnB). The project includes a command interpreter to manage objects for the AirBnB website, allowing users to create new objects, retrieve objects, perform operations on objects, update object attributes, and destroy objects. The project is interpreted/tested on Ubuntu 14.04 LTS using Python 3.4.3.
README:
The console is the first segment of the AirBnB project at Holberton School, which will collectively cover fundamental concepts of higher-level programming. The goal of the AirBnB project is to eventually deploy our server as a simple copy of the AirBnB website (HBnB). A command interpreter is created in this segment to manage objects for the AirBnB (HBnB) website. With it you can:
- Create a new object (ex: a new User or a new Place)
- Retrieve an object from a file, a database, etc.
- Do operations on objects (count, compute stats, etc.)
- Update attributes of an object
- Destroy an object
This project is interpreted/tested on Ubuntu 14.04 LTS using Python 3 (version 3.4.3).
- Clone this repository:
git clone "https://github.com/alexaorrico/AirBnB_clone.git"
- Access the AirBnB directory:
cd AirBnB_clone
- Run hbnb (interactively): run the console and enter commands at the prompt:
./console.py
- Run hbnb (non-interactively): pipe a command into the console:
echo "<command>" | ./console.py
console.py - the console contains the entry point of the command interpreter. List of commands this console currently supports (a minimal sketch of the command-loop pattern follows the list):
- EOF - exits the console
- quit - exits the console
- <emptyline> - overrides the default emptyline method and does nothing
- create - creates a new instance of BaseModel, saves it (to the JSON file) and prints the id
- destroy - deletes an instance based on the class name and id (saves the change into the JSON file)
- show - prints the string representation of an instance based on the class name and id
- all - prints the string representations of all instances, filtered by class name when one is given
- update - updates an instance based on the class name and id by adding or updating an attribute (saves the change into the JSON file)
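The console follows the pattern of Python's cmd module. The sketch below is a minimal, hypothetical illustration of that command loop (quit, EOF and empty-line handling only); it is not the project's actual console.py.

```python
#!/usr/bin/python3
"""Minimal, hypothetical sketch of an HBNB-style console (not the project's console.py)."""
import cmd


class HBNBCommand(cmd.Cmd):
    """Command interpreter supporting only quit, EOF and empty-line handling."""

    prompt = "(hbnb) "

    def do_quit(self, line):
        """quit: exit the console."""
        return True

    def do_EOF(self, line):
        """EOF (Ctrl-D): exit the console."""
        print()
        return True

    def emptyline(self):
        """An empty input line does nothing (overrides cmd's default repeat-last-command)."""
        pass


if __name__ == "__main__":
    HBNBCommand().cmdloop()
```

Running this gives the same (hbnb) prompt shown in the session example further down.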
base_model.py - The BaseModel class from which future classes will be derived (a hedged sketch of such a class follows below):
- def __init__(self, *args, **kwargs) - initialization of the base model
- def __str__(self) - string representation of the BaseModel class
- def save(self) - updates the attribute updated_at with the current datetime
- def to_dict(self) - returns a dictionary containing all keys/values of the instance
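To make the method list above concrete, here is a hedged sketch of what such a base class can look like; the uuid4 ids, the ISO datetime format and the exact dictionary handling are assumptions, not the repository's verbatim implementation.

```python
#!/usr/bin/python3
"""Hedged sketch of a BaseModel-style class; uuid4 ids and ISO datetimes are assumptions."""
import uuid
from datetime import datetime

TIME_FORMAT = "%Y-%m-%dT%H:%M:%S.%f"  # assumed serialization format


class BaseModel:
    """Base class holding id, created_at and updated_at for derived models."""

    def __init__(self, *args, **kwargs):
        if kwargs:
            # Rebuild an instance from a to_dict()-style dictionary.
            for key, value in kwargs.items():
                if key == "__class__":
                    continue
                if key in ("created_at", "updated_at"):
                    value = datetime.strptime(value, TIME_FORMAT)
                setattr(self, key, value)
        else:
            self.id = str(uuid.uuid4())
            self.created_at = datetime.now()
            self.updated_at = self.created_at

    def __str__(self):
        return "[{}] ({}) {}".format(type(self).__name__, self.id, self.__dict__)

    def save(self):
        """Update updated_at with the current datetime."""
        self.updated_at = datetime.now()

    def to_dict(self):
        """Return a dictionary of the instance with datetimes as ISO strings."""
        result = dict(self.__dict__)
        result["__class__"] = type(self).__name__
        result["created_at"] = self.created_at.isoformat()
        result["updated_at"] = self.updated_at.isoformat()
        return result
```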
Classes inherited from BaseModel: Amenity, City, Place, Review, State, User.
/models/engine directory contains the FileStorage class that handles JSON serialization and deserialization (a minimal sketch follows below):
file_storage.py - serializes instances to a JSON file & deserializes them back to instances
- def all(self) - returns the dictionary __objects
- def new(self, obj) - sets in __objects the obj with key <obj class name>.id
- def save(self) - serializes __objects to the JSON file (path: __file_path)
- def reload(self) - deserializes the JSON file to __objects
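As an illustration of the methods listed above, a minimal FileStorage-style sketch is shown below; the file.json path and the choice to keep reloaded records as plain dicts are simplifications for the sake of a self-contained example, not the project's exact engine.

```python
#!/usr/bin/python3
"""Minimal FileStorage-style sketch; the file.json path is an assumed default."""
import json


class FileStorage:
    """Serializes instances to a JSON file and deserializes them back."""

    __file_path = "file.json"
    __objects = {}

    def all(self):
        """Return the dictionary __objects."""
        return FileStorage.__objects

    def new(self, obj):
        """Set obj in __objects under the key <obj class name>.id."""
        key = "{}.{}".format(type(obj).__name__, obj.id)
        FileStorage.__objects[key] = obj

    def save(self):
        """Serialize __objects to the JSON file (path: __file_path)."""
        data = {key: obj.to_dict() for key, obj in FileStorage.__objects.items()}
        with open(FileStorage.__file_path, "w") as f:
            json.dump(data, f)

    def reload(self):
        """Deserialize the JSON file back into __objects, if it exists."""
        try:
            with open(FileStorage.__file_path) as f:
                data = json.load(f)
        except FileNotFoundError:
            return
        # The real engine rebuilds model instances from these dicts; they are
        # kept as plain dicts here to keep the sketch self-contained.
        FileStorage.__objects = data
```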
/test_models/test_base_model.py - Contains the TestBaseModel and TestBaseModelDocs classes.
TestBaseModelDocs class (a sketch of this style-check pattern follows below):
- def setUpClass(cls) - Set up for the doc tests
- def test_pep8_conformance_base_model(self) - Test that models/base_model.py conforms to PEP8
- def test_pep8_conformance_test_base_model(self) - Test that tests/test_models/test_base_model.py conforms to PEP8
- def test_bm_module_docstring(self) - Test for the base_model.py module docstring
- def test_bm_class_docstring(self) - Test for the BaseModel class docstring
- def test_bm_func_docstrings(self) - Test for the presence of docstrings in BaseModel methods
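PEP8-conformance checks like these are commonly written with the pep8 (pycodestyle) package inside a unittest. The sketch below assumes that package is installed and is only an approximation of the project's actual test class.

```python
#!/usr/bin/python3
"""Sketch of a PEP8 documentation/style test; assumes the pep8 package is installed."""
import unittest

import pep8


class TestBaseModelDocs(unittest.TestCase):
    """Illustrative style check for models/base_model.py."""

    def test_pep8_conformance_base_model(self):
        """models/base_model.py should raise no PEP8 errors or warnings."""
        style = pep8.StyleGuide(quiet=True)
        result = style.check_files(["models/base_model.py"])
        self.assertEqual(result.total_errors, 0,
                         "Found code style errors (and warnings).")


if __name__ == "__main__":
    unittest.main()
```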
TestBaseModel class (a sketch follows below):
- def test_is_base_model(self) - Test that the instantiation of a BaseModel works
- def test_created_at_instantiation(self) - Test that created_at is a public instance attribute of type datetime
- def test_updated_at_instantiation(self) - Test that updated_at is a public instance attribute of type datetime
- def test_diff_datetime_objs(self) - Test that two BaseModel instances have different datetime objects
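A hedged sketch of these functional checks, assuming the project layout described above (models/base_model.py importable from the repository root):

```python
#!/usr/bin/python3
"""Sketch of functional BaseModel tests mirroring the checks listed above."""
import unittest
from datetime import datetime

from models.base_model import BaseModel


class TestBaseModel(unittest.TestCase):
    """Basic instantiation checks for BaseModel."""

    def test_is_base_model(self):
        """Instantiation produces a BaseModel object."""
        self.assertIsInstance(BaseModel(), BaseModel)

    def test_created_at_instantiation(self):
        """created_at is a public instance attribute of type datetime."""
        self.assertIsInstance(BaseModel().created_at, datetime)

    def test_updated_at_instantiation(self):
        """updated_at is a public instance attribute of type datetime."""
        self.assertIsInstance(BaseModel().updated_at, datetime)

    def test_diff_datetime_objs(self):
        """Two instances created one after another carry different created_at values."""
        self.assertNotEqual(BaseModel().created_at, BaseModel().created_at)


if __name__ == "__main__":
    unittest.main()
```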
/test_models/test_amenity.py - Contains the TestAmenityDocs class:
- def setUpClass(cls) - Set up for the doc tests
- def test_pep8_conformance_amenity(self) - Test that models/amenity.py conforms to PEP8
- def test_pep8_conformance_test_amenity(self) - Test that tests/test_models/test_amenity.py conforms to PEP8
- def test_amenity_module_docstring(self) - Test for the amenity.py module docstring
- def test_amenity_class_docstring(self) - Test for the Amenity class docstring
/test_models/test_city.py - Contains the TestCityDocs class:
- def setUpClass(cls) - Set up for the doc tests
- def test_pep8_conformance_city(self) - Test that models/city.py conforms to PEP8
- def test_pep8_conformance_test_city(self) - Test that tests/test_models/test_city.py conforms to PEP8
- def test_city_module_docstring(self) - Test for the city.py module docstring
- def test_city_class_docstring(self) - Test for the City class docstring
/test_models/test_file_storage.py - Contains the TestFileStorageDocs class:
- def setUpClass(cls) - Set up for the doc tests
- def test_pep8_conformance_file_storage(self) - Test that models/file_storage.py conforms to PEP8
- def test_pep8_conformance_test_file_storage(self) - Test that tests/test_models/test_file_storage.py conforms to PEP8
- def test_file_storage_module_docstring(self) - Test for the file_storage.py module docstring
- def test_file_storage_class_docstring(self) - Test for the FileStorage class docstring
/test_models/test_place.py - Contains the TestPlaceDocs class:
- def setUpClass(cls) - Set up for the doc tests
- def test_pep8_conformance_place(self) - Test that models/place.py conforms to PEP8
- def test_pep8_conformance_test_place(self) - Test that tests/test_models/test_place.py conforms to PEP8
- def test_place_module_docstring(self) - Test for the place.py module docstring
- def test_place_class_docstring(self) - Test for the Place class docstring
/test_models/test_review.py - Contains the TestReviewDocs class:
- def setUpClass(cls) - Set up for the doc tests
- def test_pep8_conformance_review(self) - Test that models/review.py conforms to PEP8
- def test_pep8_conformance_test_review(self) - Test that tests/test_models/test_review.py conforms to PEP8
- def test_review_module_docstring(self) - Test for the review.py module docstring
- def test_review_class_docstring(self) - Test for the Review class docstring
/test_models/test_state.py - Contains the TestStateDocs class:
- def setUpClass(cls) - Set up for the doc tests
- def test_pep8_conformance_state(self) - Test that models/state.py conforms to PEP8
- def test_pep8_conformance_test_state(self) - Test that tests/test_models/test_state.py conforms to PEP8
- def test_state_module_docstring(self) - Test for the state.py module docstring
- def test_state_class_docstring(self) - Test for the State class docstring
/test_models/test_user.py - Contains the TestUserDocs class:
- def setUpClass(cls) - Set up for the doc tests
- def test_pep8_conformance_user(self) - Test that models/user.py conforms to PEP8
- def test_pep8_conformance_test_user(self) - Test that tests/test_models/test_user.py conforms to PEP8
- def test_user_module_docstring(self) - Test for the user.py module docstring
- def test_user_class_docstring(self) - Test for the User class docstring
Examples of use:
vagrantAirBnB_clone$ ./console.py
(hbnb) help
Documented commands (type help <topic>):
========================================
EOF all create destroy help quit show update
(hbnb) all MyModel
** class doesn't exist **
(hbnb) create BaseModel
7da56403-cc45-4f1c-ad32-bfafeb2bb050
(hbnb) all BaseModel
[[BaseModel] (7da56403-cc45-4f1c-ad32-bfafeb2bb050) {'updated_at': datetime.datetime(2017, 9, 28, 9, 50, 46, 772167), 'id': '7da56403-cc45-4f1c-ad32-bfafeb2bb050', 'created_at': datetime.datetime(2017, 9, 28, 9, 50, 46, 772123)}]
(hbnb) show BaseModel 7da56403-cc45-4f1c-ad32-bfafeb2bb050
[BaseModel] (7da56403-cc45-4f1c-ad32-bfafeb2bb050) {'updated_at': datetime.datetime(2017, 9, 28, 9, 50, 46, 772167), 'id': '7da56403-cc45-4f1c-ad32-bfafeb2bb050', 'created_at': datetime.datetime(2017, 9, 28, 9, 50, 46, 772123)}
(hbnb) destroy BaseModel 7da56403-cc45-4f1c-ad32-bfafeb2bb050
(hbnb) show BaseModel 7da56403-cc45-4f1c-ad32-bfafeb2bb050
** no instance found **
(hbnb) quit
No known bugs at this time.
Authors:
Alexa Orrico - Github / Twitter
Jennifer Huang - Github / Twitter
Second part of AirBnB: Joann Vuong
Public Domain. No copyright protection.