classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence.
You can learn more about ClassifAI's features at ClassifAIPlugin.com and find documentation at the ClassifAI documentation site.
Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
- Generate a summary of post content and store it as an excerpt using OpenAI's ChatGPT API, Microsoft Azure's OpenAI service or Google's Gemini API
- Generate titles from post content using OpenAI's ChatGPT API, Microsoft Azure's OpenAI service or Google's Gemini API
- Expand or condense text content using OpenAI's ChatGPT API, Microsoft Azure's OpenAI service or Google's Gemini API
- Generate new images on demand to use in-content or as a featured image using OpenAI's DALL·E 3 API
- Generate transcripts of audio files using OpenAI's Whisper API
- Moderate incoming comments for sensitive content using OpenAI's Moderation API
- Convert text content into audio and output a "read-to-me" feature on the front-end to play this audio using Microsoft Azure's Text to Speech API, Amazon Polly or OpenAI's Text to Speech API
- Classify post content using IBM Watson's Natural Language Understanding API, OpenAI's Embedding API or Microsoft Azure's OpenAI service
- Create a smart 404 page with a recommended results section that suggests relevant content based on the URL the user was trying to access, using either OpenAI's Embedding API or Microsoft Azure's OpenAI service in combination with ElasticPress
- BETA: Recommend content based on overall site traffic via Microsoft Azure's AI Personalizer API (note that this service has been deprecated by Microsoft and will no longer work; we are looking to replace this with a new provider to maintain the same functionality, see issue #392)
- Generate image alt text, image tags, and smartly crop images using Microsoft Azure's AI Vision API
- Scan images and PDF files for embedded text and save for use in post meta using Microsoft Azure's AI Vision API
- Bulk classify content with WP-CLI
(Screenshots: Tagging, Recommended Content, Excerpt Generation, Comment Moderation, Audio Transcripts, Title Generation, Expand or Condense Text, Text to Speech, Alt Text, Smart Cropping, Tagging, Generate Images)
- PHP 7.4+
- WordPress 6.1+
- To utilize the NLU Language Processing functionality, you will need an active IBM Watson account.
- To utilize the ChatGPT, Embeddings, Text to Speech or Whisper Language Processing functionality or DALL·E Image Processing functionality, you will need an active OpenAI account.
- To utilize the Azure AI Vision Image Processing functionality or Text to Speech Language Processing functionality, you will need an active Microsoft Azure account.
- To utilize the Azure OpenAI Language Processing functionality, you will need an active Microsoft Azure account and you will need to apply for OpenAI access.
- To utilize the Google Gemini Language Processing functionality, you will need an active Google Gemini account.
- To utilize the AWS Language Processing functionality, you will need an active AWS account.
- To utilize the Smart 404 feature, you will need to use ElasticPress 5.0.0+ and Elasticsearch 7.0+.
Note that there is no cost to using ClassifAI itself. Both IBM Watson and Microsoft Azure have free plans for some of their AI services, but above those free plans there are paid levels as well. So if you expect to process a high volume of content, you'll want to review the pricing plans for these services to understand whether you'll incur any costs. For the most part, both services' free plans are quite generous and should at least allow for testing ClassifAI to better understand its feature set, and at best could allow for totally free usage. OpenAI has a limited trial option that can be used for testing but will require a valid paid plan after that.
IBM Watson's Natural Language Understanding ("NLU"), which is one of the providers that powers the classification feature, has a "lite" pricing tier that offers 30,000 free NLU items per month.
OpenAI, which is one of the providers that powers the classification, title generation, excerpt generation, content resizing, audio transcripts generation, text to speech, moderation and image generation features, has a limited free trial and then requires a pay per usage plan.
Microsoft Azure AI Vision, which is one of the providers that powers the descriptive text generator, image tags generator, image cropping, image text extraction and PDF text extraction features, has a "free" pricing tier that offers 20 transactions per minute and 5,000 transactions per month.
Microsoft Azure AI Speech, which is one of the providers that powers the text to speech feature, has a "free" pricing tier that offers 0.5 million characters per month.
Microsoft Azure AI Personalizer, which is one of the providers that powers the recommended content feature, has a "free" pricing tier that offers 50,000 transactions per month.
Microsoft Azure OpenAI, which is one of the providers that powers the title generation, excerpt generation and content resizing features, has a pay per usage plan.
Google Gemini, which is one of the providers that powers the title generation, excerpt generation and content resizing features, has a "free" pricing tier that offers 60 queries per minute.
To install the latest development version from GitHub, clone the repository and build the plugin:

```sh
git clone https://github.com/10up/classifai.git && cd classifai
composer install && npm install && npm run build
```
ClassifAI releases can be installed via Composer.
Instruct Composer to install ClassifAI into the plugins directory by adding or modifying the "extra" section of your project's composer.json file to match the following:
"extra": {
"installer-paths": {
"plugins/{$name}": [
"type:wordpress-plugin"
]
}
}
Add this repository to composer.json, specifying a release version, as shown below:
"repositories": [
{
"type": "package",
"package": {
"name": "10up/classifai",
"version": "3.1.1",
"type": "wordpress-plugin",
"dist": {
"url": "https://github.com/10up/classifai/archive/refs/tags/3.1.1.zip",
"type": "zip"
}
}
}
]
Finally, require the plugin, using the version number you specified in the previous step:
"require": {
"10up/classifai": "3.1.1"
}
After you run `composer update`, ClassifAI will be installed in the plugins directory with no build steps needed.
ClassifAI is a sophisticated solution that we want organizations of all shapes and sizes to count on. To keep adopters apprised of major updates and beta testing opportunities, gather feedback, support auto updates, and prioritize common use cases, we're asking for a little bit of information in exchange for a free key. Your information will be kept confidential.
- Register for a free ClassifAI account here.
- Check for an email from `ClassifAI Team` which contains the registration key.
- Note that the email will be sent from [email protected], so please whitelist this email address if needed.
- In the `Registered Email` field, enter the email you used for registration.
- In the `Registration Key` field, enter the registration key from the email in step 1 above.
- Register for an IBM Cloud account or sign into your existing one.
- Check for an email from `IBM Cloud` and click the `Confirm Account` link.
- Log into your account (accepting the privacy policy) and create a new Natural Language Understanding Resource if you do not already have one. It may take a minute for your account to fully populate with the default resource group to use.
- Click `Manage` in the left hand menu, then `Show credentials` on the Manage page to view the credentials for this resource.
- Select IBM Watson NLU in the provider dropdown.
The credentials screen will show either an API key or a username/password combination.
If the credentials contain an API key:
- In the `API URL` field, enter the URL.
- Enter your API Key in the `API Key` field.
If the credentials contain a username and password:
- In the `API URL` field, enter the URL.
- Enter the `username` value into the `API Username` field.
- Enter the `password` into the `API Key` field.
IBM Watson endpoint URLs with `watsonplatform.net` were deprecated on 26 May 2021. The pattern for the new endpoint URLs is `api.{location}.{offering}.watson.cloud.ibm.com`. For example, Watson's NLU service endpoint will look like `api.{location}.natural-language-understanding.watson.cloud.ibm.com`.
For more information, see https://cloud.ibm.com/docs/watson?topic=watson-endpoint-change.
IBM Watson's Categories, Keywords, Concepts & Entities can each be stored in existing WordPress taxonomies or a custom Watson taxonomy.
3. Configure Post Types to classify and IBM Watson Features to enable under ClassifAI > Language Processing > Classification
- Choose which public post types to classify when saved.
- Choose whether to assign category, keyword, entity, and concept as well as the thresholds and taxonomies used for each.
- Sign up for an OpenAI account or sign into your existing one.
- If creating a new account, complete the verification process (requires confirming your email and phone number).
- Log into your account and go to the API key page.
- Click `Create new secret key` and copy the key that is shown.
2. Configure OpenAI API Keys under Tools > ClassifAI > Language Processing > Title Generation, Excerpt Generation or Content Resizing
- Select OpenAI ChatGPT in the provider dropdown.
- Enter your API Key copied from the above step into the `API Key` field.
- For each feature, set any options as needed.
- Save changes and ensure a success message is shown. An error will show if API authentication fails.
- To test excerpt generation, edit (or create) an item that supports excerpts. Note: only the block editor is supported.
- Ensure this item has content saved.
- Open the Excerpt panel in the sidebar and click on `Generate Excerpt`.
- To test title generation, edit (or create) an item that supports titles.
- Ensure this item has content saved.
- Open the Summary panel in the sidebar and click on `Generate titles`.
- To test content resizing, edit (or create) an item. Note: only the block editor is supported.
- Add a paragraph block with some content.
- With this block selected, select the AI icon in the toolbar and choose to either expand or condense the text.
- In the modal that pops up, select one of the options.
- Register for a Microsoft Azure account or sign into your existing one.
- Request access to Azure OpenAI, if not already granted.
- Log into your account and create a new Azure OpenAI resource if you do not already have one.
- Copy the name you chose for the deployment when deploying the resource in the previous step.
- Click `Keys and Endpoint` in the left hand Resource Management menu to get the endpoint for this resource.
- Click the copy icon next to `KEY 1` to copy the API Key credential for this resource.
2. Configure API Keys under Tools > ClassifAI > Language Processing > Title Generation, Excerpt Generation or Content Resizing
- Select Azure OpenAI in the provider dropdown.
- Enter the endpoint you copied from the above step into the `Endpoint URL` field.
- Enter your API Key copied from the above step into the `API key` field.
- Enter your deployment name copied from the above step into the `Deployment name` field.
- Check the "Enable" checkbox in the above screen.
- Set the other options as needed.
- Save changes and ensure a success message is shown. An error will show if API authentication fails.
- To test excerpt generation, edit (or create) an item that supports excerpts.
- Ensure this item has content saved.
- Open the Excerpt panel in the sidebar and click on `Generate Excerpt`.
- To test title generation, edit (or create) an item that supports titles.
- Ensure this item has content saved.
- Open the Summary panel in the sidebar and click on `Generate titles`.
- To test content resizing, edit (or create) an item. Note: only the block editor is supported.
- Add a paragraph block with some content.
- With this block selected, select the AI icon in the toolbar and choose to either expand or condense the text.
- In the modal that pops up, select one of the options.
- Sign up for a Google account or sign into your existing one.
- Go to the Google AI Gemini website and click on the Get API key button, or go to the API key page directly.
- Note that if this page doesn't work, it's likely that Gemini is not enabled in your workspace. Contact your workspace administrator to get this enabled.
- Click `Create API key` and copy the key that is shown.
2. Configure API Keys under Tools > ClassifAI > Language Processing > Title Generation, Excerpt Generation or Content Resizing
- Select Google AI (Gemini API) in the provider dropdown.
- Enter your API Key copied from the above step into the `API Key` field.
- Check the "Enable" checkbox in the above screen.
- Set the other options as needed.
- Save changes and ensure a success message is shown. An error will show if API authentication fails.
- To test excerpt generation, edit (or create) an item that supports excerpts.
- Ensure this item has content saved.
- Open the Excerpt panel in the sidebar and click on `Generate Excerpt`.
- To test title generation, edit (or create) an item that supports titles.
- Ensure this item has content saved.
- Open the Summary panel in the sidebar and click on `Generate titles`.
- To test content resizing, edit (or create) an item. Note: only the block editor is supported.
- Add a paragraph block with some content.
- With this block selected, select the AI icon in the toolbar and choose to either expand or condense the text.
- In the modal that pops up, select one of the options.
- Sign up for an OpenAI account or sign into your existing one.
- If creating a new account, complete the verification process (requires confirming your email and phone number).
- Log into your account and go to the API key page.
- Click `Create new secret key` and copy the key that is shown.
- Select OpenAI Embeddings in the provider dropdown.
- Enter your API Key copied from the above step into the `API Key` field.
- Choose to automatically classify content.
- Set the other options as needed.
- Save changes and ensure a success message is shown. An error will show if API authentication fails.
- Create one or more terms within the taxonomy (or taxonomies) chosen in settings.
- Create a new piece of content that matches the post type and post status chosen in settings.
- Open the taxonomy panel in the sidebar and see terms that were auto-applied.
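If you want to double-check the results in code rather than in the editor UI, the applied classifications are just regular WordPress taxonomy terms, so core term functions can read them. Below is a minimal sketch; the post ID and taxonomy slug are placeholders for the post you created and the taxonomy you selected in the settings:

```php
<?php
// Minimal sketch: list the terms that were auto-applied to a classified post.
// Both values below are placeholders; use your own post ID and the taxonomy
// you chose in the Classification settings (e.g. category, post_tag, etc.).
$post_id  = 123;
$taxonomy = 'category';

$terms = wp_get_object_terms( $post_id, $taxonomy );

if ( ! is_wp_error( $terms ) ) {
	foreach ( $terms as $term ) {
		echo esc_html( $term->name ) . "\n";
	}
}
```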
Note that OpenAI can create a transcript for audio files that meet the following requirements:
- The file must be presented in mp3, mp4, mpeg, mpga, m4a, wav, or webm format
- The file size must be less than 25 megabytes (MB)
- Sign up for an OpenAI account or sign into your existing one.
- If creating a new account, complete the verification process (requires confirming your email and phone number).
- Log into your account and go to the API key page.
- Click `Create new secret key` and copy the key that is shown.
2. Configure OpenAI API Keys under Tools > ClassifAI > Language Processing > Audio Transcripts Generation
- Select OpenAI Whisper in the provider dropdown.
- Enter your API Key copied from the above step into the `API Key` field.
- Choose to enable the ability to automatically generate transcripts from supported audio files.
- Choose which user roles have access to this ability.
- Save changes and ensure a success message is shown. An error will show if API authentication fails.
- Upload a new audio file.
- Check to make sure the transcript was stored in the Description field.
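If you want to reuse a generated transcript elsewhere (for example in a theme template), note that an attachment's Description field is stored as the attachment post's `post_content` in WordPress core, so it can be read back with standard post functions. A minimal sketch, where the attachment ID is a placeholder:

```php
<?php
// Minimal sketch: read a stored transcript back out of an audio attachment.
// $attachment_id is a placeholder; the Description field shown in the Media
// Library is stored as the attachment post's post_content.
$attachment_id = 123;
$transcript    = get_post_field( 'post_content', $attachment_id );

if ( ! empty( $transcript ) ) {
	echo '<div class="audio-transcript">' . wp_kses_post( wpautop( $transcript ) ) . '</div>';
}
```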
- Register for a Microsoft Azure account or sign into your existing one.
- Log into your account and create a new Speech Service if you do not already have one. It may take a minute for your account to fully populate with the default resource group to use.
- Click `Keys and Endpoint` in the left hand Resource Management menu to view the `Location/Region` for this resource.
- Click the copy icon next to `KEY 1` to copy the API Key credential for this resource.
2. Configure Microsoft Azure API and Key under Tools > ClassifAI > Language Processing > Text to Speech
- Select Microsoft Azure AI Speech in the provider dropdown.
- In the `Endpoint URL` field, enter the following URL, replacing `LOCATION` with the `Location/Region` you found above: `https://LOCATION.tts.speech.microsoft.com/`.
- In the `API Key` field, enter your `KEY 1` copied from above.
- Click Save Changes (the page will reload).
- If connected successfully, a new dropdown with the label "Voices" will be displayed.
- Select a voice as per your choice.
- Select a post type that should use this service.
- Assuming the post type selected is "post", create a new post and publish it.
- After a few seconds, a "Preview" button will appear under the ClassifAI settings panel.
- Click the button to preview the generated speech audio for the post.
- View the post on the front-end and see that a read-to-me feature has been added.
- Sign up for an OpenAI account or sign into your existing one.
- If creating a new account, complete the verification process (requires confirming your email and phone number).
- Log into your account and go to the API key page.
- Click `Create new secret key` and copy the key that is shown.
- Select OpenAI Text to Speech in the provider dropdown.
- Enter your API Key copied from the above step into the `API Key` field.
- Assuming the post type selected is "post", create a new post and publish it.
- After a few seconds, a "Preview" button will appear under the ClassifAI settings panel.
- Click the button to preview the generated speech audio for the post.
- View the post on the front-end and see that a read-to-me feature has been added.
- Register for an AWS account or sign into your existing one.
- Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/
- Create IAM User (If you don't have any IAM user)
- In the navigation pane, choose Users and then click Create user
- On the Specify user details page, under User details, in User name, enter the name for the new user.
- Click Next
- On the Set permissions page, under Permissions options, select Attach policies directly
- Under Permissions policies, search for "Polly" and select the AmazonPollyFullAccess policy
- Click Next
- On the Review and create page, review all of the choices you made up to this point. When you are ready to proceed, click Create user.
- In the navigation pane, choose Users
- Choose the name of the user for which you want to create access keys, and then choose the Security credentials tab.
- In the Access keys section, click Create access key.
- On the Access key best practices & alternatives page, select Application running outside AWS
- Click Next
- On the Retrieve access key page, choose Show to reveal the value of your user's secret access key.
- Copy and save the credentials in a secure location on your computer, or click "Download .csv file" to save the access key ID and secret access key to a `.csv` file.
- Select Amazon Polly in the provider dropdown.
- In the `AWS access key` field, enter the `Access key` copied from above.
- In the `AWS secret access key` field, enter your `Secret access key` copied from above.
- In the `AWS Region` field, enter your AWS region value, e.g. `us-east-1`.
- Click Save Changes (the page will reload).
- If connected successfully, a new dropdown with the label "Voices" will be displayed.
- Select a voice and voice engine as per your choice.
- Select a post type that should use this service.
- Assuming the post type selected is "post", create a new post and publish it.
- After a few seconds, a "Preview" button will appear under the ClassifAI settings panel.
- Click the button to preview the generated speech audio for the post.
- View the post on the front-end and see that a read-to-me feature has been added.
- This Feature is powered by either OpenAI or Azure OpenAI.
- Once you've chosen a Provider, you'll need to create an account and get authentication details.
- When setting things up on the Azure side, ensure you choose either the `text-embedding-3-small` or `text-embedding-3-large` model. The Feature will not work with other models.
- Select the proper Provider in the provider dropdown.
- Enter your authentication details.
- Configure any other settings as desired.
Once the Smart 404 Feature is configured, you can then proceed to get ElasticPress set up to index the data.
If on a standard WordPress installation:
- Install and activate the ElasticPress plugin.
- Set your Elasticsearch URL in the ElasticPress settings (`ElasticPress > Settings`).
- Go to the `ElasticPress > Sync` settings page and trigger a sync, ensuring this is set to run a sync from scratch. This will send over the new schema to Elasticsearch and index all content, including creating vector embeddings for each post.
If on a WordPress VIP hosted environment:
- Enable Enterprise Search
- Run the VIP-CLI `index` command. This sends the new schema to Elasticsearch and indexes all content, including creating vector embeddings for each post. Note you may need to use the `--setup` flag to ensure the schema is created correctly.
At this point all of your content should be indexed, along with the embeddings data. You'll then need to update your 404 template to display the recommended results.
The Smart 404 Feature comes with a few helper functions that can be used to display the recommended results on your 404 page:
- Directly display the results using the `Classifai\render_smart_404_results()` function.
- Get the data and then display it in your own way using the `Classifai\get_smart_404_results()` function.
You will need to directly integrate these functions into your 404 template where desired. The plugin does not automatically display the results on the 404 page for you.
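For example, a classic theme could call the render helper from its `404.php` template. This is only a minimal sketch, assuming a theme that uses `get_header()`/`get_footer()` and guarding with `function_exists()` in case the plugin is deactivated; the text domain is a placeholder:

```php
<?php
// 404.php (theme template) - minimal sketch of wiring in Smart 404 results.
get_header();
?>

<h1><?php esc_html_e( 'Page not found', 'my-theme' ); ?></h1>

<?php
// Only render recommendations if ClassifAI and this helper are available.
if ( function_exists( 'Classifai\render_smart_404_results' ) ) {
	Classifai\render_smart_404_results(
		[
			'num'      => 5,
			'fallback' => true,
		]
	);
}

get_footer();
```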
Both functions support the following arguments. If any argument is not provided, the default value set on the settings page will be used:
- `$index` (string) - The ElasticPress index to search in. Default is `post`.
- `$num` (int) - Maximum number of results to display. Default is `5`.
- `$num_candidates` (int) - Maximum number of results to search over. Default is `5000`.
- `$rescore` (bool) - Whether to run a rescore query or not. Can give better results but often is slower. Default is `false`.
- `$score_function` (string) - The vector scoring function to use. Default is `cosine`. Options are `cosine`, `dot_product`, `l1_norm` and `l2_norm`.
The `Classifai\render_smart_404_results()` function also supports the following additional arguments:
- `$fallback` (bool) - Whether to run a fallback WordPress query if no results are found in Elasticsearch. These results will then be rendered. Default is `true`.
Examples:
```php
// Render the results.
Classifai\render_smart_404_results(
	[
		'index'          => 'post',
		'num'            => 3,
		'num_candidates' => 1000,
		'rescore'        => true,
		'fallback'       => true,
		'score_function' => 'dot_product',
	]
);
```

```php
// Get the results.
$results = Classifai\get_smart_404_results(
	[
		'index'          => 'post',
		'num'            => 10,
		'num_candidates' => 8000,
		'rescore'        => false,
		'score_function' => 'cosine',
	]
);

ob_start();

// Render the results.
foreach ( $results as $result ) {
	?>
	<div>
		<?php if ( has_post_thumbnail( $result->ID ) ) : ?>
			<figure>
				<a href="<?php echo esc_url( get_permalink( $result->ID ) ); ?>">
					<?php echo wp_kses_post( get_the_post_thumbnail( $result->ID ) ); ?>
				</a>
			</figure>
		<?php endif; ?>
		<a href="<?php echo esc_url( get_permalink( $result->ID ) ); ?>">
			<?php echo esc_html( $result->post_title ); ?>
		</a>
	</div>
	<?php
}

$output = ob_get_clean();

echo $output;
```
If you want to quickly test things locally, ensure you have Docker installed (Docker Desktop recommended) and then run the following command:
```sh
docker run -p 9200:9200 -d --name elasticsearch \
  -e "discovery.type=single-node" \
  -e "xpack.security.enabled=false" \
  -e "xpack.security.http.ssl.enabled=false" \
  -e "xpack.license.self_generated.type=basic" \
  docker.elastic.co/elasticsearch/elasticsearch:7.9.0
```
This will download, install and start Elasticsearch v7.9.0 on your local machine. You can then access Elasticsearch at http://localhost:9200, which is the same URL you can use to configure ElasticPress with. It is recommended that you change the `Content Items per Index Cycle` setting in ElasticPress to `20` to ensure indexing doesn't time out. Also be aware of API rate limits on the OpenAI Embeddings API.
Note that Azure AI Vision can analyze and crop images that meet the following requirements:
- The image must be presented in JPEG, PNG, GIF, or BMP format
- The file size of the image must be less than 4 megabytes (MB)
- The dimensions of the image must be greater than 50 x 50 pixels
- The file must be externally accessible via URL (i.e. local sites and setups that block direct file access will not work out of the box)
- Register for a Microsoft Azure account or sign into your existing one.
- Log into your account and create a new Azure AI Vision Service if you do not already have one. It may take a minute for your account to fully populate with the default resource group to use.
- Click `Keys and Endpoint` in the left hand Resource Management menu to view the `Endpoint` URL for this resource.
- Click the copy icon next to `KEY 1` to copy the API Key credential for this resource.
2. Configure Microsoft Azure API and Key under Tools > ClassifAI > Image Processing > Descriptive Text Generator, Image Tags Generator, Image Cropping, Image Text Extraction or PDF Text Extraction
- Select Microsoft Azure AI Vision in the provider dropdown.
- In the `Endpoint URL` field, enter your `API endpoint`.
- In the `API Key` field, enter your `KEY 1`.
- For features that have thresholds or taxonomy settings, set those as needed.
- Image tagging uses Azure's Describe Image API.
- Sign up for an OpenAI account or sign into your existing one.
- If creating a new account, complete the verification process (requires confirming your email and phone number).
- Log into your account and go to the API key page.
- Click `Create new secret key` and copy the key that is shown.
- Select OpenAI DALL·E 3 in the provider dropdown.
- Enter your API Key copied from the above step into the `API Key` field.
- Choose to add the ability to generate images.
- If image generation is configured, set the other options as needed.
- Save changes and ensure a success message is shown. An error will show if API authentication fails.
- Create a new content item
- Insert an Image block or choose to add a featured image and choose a new item from the Media Library
- In the media modal that opens, click on the `Generate image` tab
- Enter a prompt to generate an image
- Once images are generated, choose one or more images to import into your media library
- Choose one image to insert into the content
- Sign up for an OpenAI account or sign into your existing one.
- If creating a new account, complete the verification process (requires confirming your email and phone number).
- Log into your account and go to the API key page.
- Click `Create new secret key` and copy the key that is shown.
- Select OpenAI Moderation in the provider dropdown.
- Enter your API Key copied from the above step into the `API Key` field.
- Select the "Enable" checkbox in the above screen.
- Select "Comments" in the "Content to moderate" section.
Azure AI Personalizer has been retired by Microsoft as of September 2023. The service will continue to work until 2026, but Personalizer resources can no longer be created. As such, consider this provider deprecated and be aware that we will be removing it in the near future. We are hoping to replace it with a new provider to maintain the same functionality (see issue #392).
Note that Personalizer requires sufficient data volume to enable Personalizer to learn. In general, we recommend a minimum of ~1,000 events per day to ensure Personalizer learns effectively. If Personalizer doesn't receive sufficient data, the service takes longer to determine the best actions.
- Register for a Microsoft Azure account or sign into your existing one.
- Log into your account and create a new Personalizer resource.
- Enter your service name, select a subscription, location, pricing tier, and resource group.
- Select Create to create the resource.
- After your resource has deployed, select the Go to Resource button to go to your Personalizer resource.
- Click `Keys and Endpoint` in the left hand Resource Management menu to view the `Endpoint` URL for this resource.
- Click the copy icon next to `KEY 1` to copy the API Key credential for this resource.
For more information, see https://docs.microsoft.com/en-us/azure/cognitive-services/personalizer/how-to-create-resource
- In the `Endpoint URL` field, enter your `Endpoint` URL from Step 1 above.
- In the `API Key` field, enter your `KEY 1` from Step 1 above.
- Check out the ClassifAI docs.
ClassifAI connects your WordPress site directly to your account with specific service provider(s) (e.g. Microsoft Azure AI, IBM Watson, OpenAI), so no data is gathered by 10up. The data gathered in our registration form is used simply to stay in touch with users so we can provide product updates and news. More information is available in the Privacy Policy on ClassifAIplugin.com.
What are the Categories, Keywords, Concepts, and Entities within the NLU Language Processing feature?
Categories are five levels of hierarchies that IBM Watson can identify from your text. Keywords are specific terms from your text that IBM Watson is able to identify. Concepts are high-level concepts that are not necessarily directly referenced in your text. Entities are people, companies, locations, and classifications that are made by IBM Watson from your text.
Whatever options you have selected in the Category, Keyword, Entity, and Concept taxonomy dropdowns in the NLU classification settings can be viewed within Classic Editor metaboxes and the Block Editor side panel. They can also be viewed in the All Posts and All Pages table list views by utilizing the Screen Options to enable those columns if they're not already appearing in your table list view.
We recommend that you are transparent with your users that AI tools are being used. This can be done by adding a notice to your site's Privacy Policy or similar page. Sample copy is provided below:
This site makes use of Artificial Intelligence tools to help with tasks like language processing, image processing, and content recommendations.
When a post is sent to OpenAI (e.g. to generate a title or excerpt), is the post content fed into OpenAI and used for other customers?
According to OpenAI, they do not train their models on any data that is sent via API requests (see https://openai.com/enterprise-privacy). OpenAI may keep the data for up to 30 days to identify abuse, though you can request zero data retention (ZDR) with a qualifying use-case.
Active: 10up is actively working on this, and we expect to continue work for the foreseeable future, including keeping it tested up to the most recent version of WordPress. Bug reports, feature requests, questions, and pull requests are welcome.
A complete listing of all notable changes to ClassifAI is documented in CHANGELOG.md.
Please read CODE_OF_CONDUCT.md for details on our code of conduct, CONTRIBUTING.md for details on the process for submitting pull requests to us, and CREDITS.md for a listing of maintainers, contributors, and libraries for ClassifAI.