ToxBlock πŸ›‘οΈ

npm version CI/CD Pipeline License: MIT

A professional TypeScript module that uses Google's Gemini AI to detect profanity, toxic content, and inappropriate language in all languages. Built with enterprise-grade quality, comprehensive testing, and full TypeScript support.

✨ Features

  • 🌍 Multilingual Support - Detects profanity in all languages
  • πŸ€– Powered by Gemini AI - Leverages Google's advanced AI for accurate detection
  • πŸ“ Full TypeScript Support - Complete type definitions and IntelliSense
  • πŸ§ͺ Comprehensive Testing - 100% test coverage with Jest
  • πŸ“š Extensive Documentation - JSDoc comments and TypeDoc generation
  • πŸ”’ Enterprise Ready - Professional error handling and logging
  • ⚑ High Performance - Optimized for speed and efficiency
  • πŸ›‘οΈ Type Safe - Strict TypeScript configuration

πŸ“¦ Installation

# npm
npm install toxblock

# yarn
yarn add toxblock

# pnpm
pnpm add toxblock

πŸš€ Quick Start

import { ToxBlock } from 'toxblock';

// Initialize with your Gemini API key
const toxBlock = new ToxBlock({
  apiKey: 'your-gemini-api-key'
});

// Check a single text
const result = await toxBlock.checkText('Hello, how are you?');
console.log(result.isProfane);  // false
console.log(result.confidence); // 0.95

// Check multiple texts
const results = await toxBlock.checkTexts([
  'Hello world',
  'Some text to check'
]);

πŸ”§ Configuration

import { ToxBlock, ToxBlockConfig } from 'toxblock';

const config: ToxBlockConfig = {
  apiKey: 'your-gemini-api-key',
  model: 'gemini-pro', // Optional: default is 'gemini-pro'
  timeout: 10000,      // Optional: default is 10000 ms
  customPrompt: 'Your custom prompt template' // Optional
};

const toxBlock = new ToxBlock(config);

πŸ“– API Reference

ToxBlock

Main class for profanity detection.

Constructor

new ToxBlock(config: ToxBlockConfig)

Methods

checkText(text: string): Promise<ToxBlockResult>

Analyzes a single text for profanity.

Parameters:

  • text (string): The text to analyze

Returns: Promise resolving to ToxBlockResult

Example:

const result = await toxBlock.checkText('Sample text');
if (result.isProfane) {
  console.log('Profanity detected!');
}

checkTexts(texts: string[]): Promise<ToxBlockResult[]>

Analyzes multiple texts in batch.

Parameters:

  • texts (string[]): Array of texts to analyze

Returns: Promise resolving to array of ToxBlockResult

getConfig(): { model: string; timeout: number }

Returns current configuration.

Types

ToxBlockConfig

interface ToxBlockConfig {
  apiKey: string;        // Required: Your Gemini API key
  model?: string;        // Optional: Model name (default: 'gemini-pro')
  timeout?: number;      // Optional: Timeout in ms (default: 10000)
  customPrompt?: string; // Optional: Custom prompt template
}

ToxBlockResult

interface ToxBlockResult {
  isProfane: boolean;  // Whether text contains profanity
  confidence: number;  // Confidence score (0-1)
  language?: string;   // Detected language
  details?: string;    // Additional details
}
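In practice you will usually act only on high-confidence flags rather than on `isProfane` alone. A minimal sketch of a threshold check; the `shouldBlock` helper and the 0.8 default threshold are assumptions for illustration, not part of the toxblock API:

```typescript
// Hypothetical helper, not part of the toxblock API: treat a result
// as blockable only when it is flagged AND sufficiently confident.
function shouldBlock(
  result: { isProfane: boolean; confidence: number },
  threshold = 0.8
): boolean {
  return result.isProfane && result.confidence >= threshold;
}

console.log(shouldBlock({ isProfane: true, confidence: 0.95 })); // true
console.log(shouldBlock({ isProfane: true, confidence: 0.4 }));  // false
```

Tune the threshold to your tolerance for false positives: a lower value flags more borderline content.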

ToxBlockError

class ToxBlockError extends Error {
  code: string;           // Error code
  originalError?: Error;  // Original error if any
}

🌍 Multilingual Examples

// English
const result1 = await toxBlock.checkText('Hello world');
// Spanish
const result2 = await toxBlock.checkText('Hola mundo');
// French
const result3 = await toxBlock.checkText('Bonjour le monde');
// Japanese
const result4 = await toxBlock.checkText('γ“γ‚“γ«γ‘γ―δΈ–η•Œ');
// Arabic
const result5 = await toxBlock.checkText('Ω…Ψ±Ψ­Ψ¨Ψ§ Ψ¨Ψ§Ω„ΨΉΨ§Ω„Ω…');
// All will return appropriate ToxBlockResult objects

πŸ› οΈ Advanced Usage

Custom Prompt Template

const customPrompt = `
Analyze this text for inappropriate content: "{TEXT}"
Return JSON with isProfane (boolean) and confidence (0-1).
`;

const toxBlock = new ToxBlock({
  apiKey: 'your-api-key',
  customPrompt
});

Error Handling

try {
  const result = await toxBlock.checkText('Sample text');
  console.log(result);
} catch (error) {
  if (error instanceof ToxBlockError) {
    console.error(`ToxBlock Error [${error.code}]: ${error.message}`);
    if (error.originalError) {
      console.error('Original error:', error.originalError);
    }
  }
}
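Transient failures (timeouts, rate limits) are often worth retrying before giving up. A minimal sketch of a generic retry wrapper; `withRetry` is a hypothetical helper, not part of the toxblock API:

```typescript
// Hypothetical helper: retry an async operation a few times,
// rethrowing the last error if every attempt fails.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
    }
  }
  throw lastError;
}

// Usage (assuming a configured toxBlock instance):
// const result = await withRetry(() => toxBlock.checkText('Sample text'));
```

A production version would typically add exponential backoff between attempts and only retry error codes that indicate transient failures.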

Batch Processing

const texts = [
  'First message',
  'Second message',
  'Third message'
];

const results = await toxBlock.checkTexts(texts);
results.forEach((result, index) => {
  console.log(`Text ${index + 1}: ${result.isProfane ? 'FLAGGED' : 'CLEAN'}`);
});
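For very large inputs, you may want to split the array into smaller batches before calling `checkTexts`. A generic chunking helper as a sketch; `chunk` and the batch size of 10 are assumptions, not part of the toxblock API:

```typescript
// Split an array into batches of at most `size` items each.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

console.log(chunk(['a', 'b', 'c'], 2)); // [['a', 'b'], ['c']]

// Usage (assuming a configured toxBlock instance):
// for (const batch of chunk(texts, 10)) {
//   const results = await toxBlock.checkTexts(batch);
// }
```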

πŸ§ͺ Testing

# Run all tests
npm test
# Run tests with coverage
npm run test:coverage
# Run tests in watch mode
npm run test:watch
# Run integration tests (requires GEMINI_API_KEY)
GEMINI_API_KEY=your-key npm test

πŸ”¨ Development

# Clone the repository
git clone https://github.com/sw3do/toxblock.git
cd toxblock
# Install dependencies
npm install
# Run in development mode
npm run dev
# Build the project
npm run build
# Run linting
npm run lint
# Fix linting issues
npm run lint:fix
# Format code
npm run format
# Generate documentation
npm run docs

πŸ“‹ Requirements

  • Node.js >= 16.0.0
  • Google Gemini API key
  • TypeScript >= 5.0.0 (for development)

πŸ”‘ Getting a Gemini API Key

  1. Visit Google AI Studio
  2. Sign in with your Google account
  3. Create a new API key
  4. Copy the key and use it in your configuration
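Rather than hard-coding the key, it is safer to read it from an environment variable. A minimal sketch; `loadGeminiKey` is a hypothetical helper, not part of the toxblock API:

```typescript
// Hypothetical helper: read the Gemini API key from the environment
// and fail fast with a clear error if it is missing.
function loadGeminiKey(
  env: Record<string, string | undefined> = process.env
): string {
  const key = env.GEMINI_API_KEY;
  if (!key) {
    throw new Error('GEMINI_API_KEY environment variable is not set');
  }
  return key;
}

// Usage:
// const toxBlock = new ToxBlock({ apiKey: loadGeminiKey() });
```

Failing fast at startup gives a clearer error than letting a missing key surface later as an authentication failure inside an API call.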

πŸ“„ License

MIT License - see the LICENSE file for details.

🀝 Contributing

Contributions are welcome! Please read our Contributing Guide for details on our code of conduct and the process for submitting pull requests.

πŸ“ž Support

πŸ™ Acknowledgments

  • Google Gemini AI for providing the underlying AI capabilities
  • The TypeScript community for excellent tooling
  • All contributors who help improve this project

Made with ❀️ by sw3do
