```
 _______ _______ _______ _______ ______ _______ ___ ___ 
|    ___|    ___|   _   |_     _|   __ \_     _|   |   |
|    ___|    ___|       | |   | |      <_|   |_|-     -|
|___|   |_______|___|___| |___| |___|__|_______|___|___|
```
Transform any CSV into a production-ready ML model in minutes, not months.
The Featrix Sphere API automatically builds neural embedding spaces from your data and trains high-accuracy predictors without requiring any ML expertise. Just upload your data, specify what you want to predict, and get a production API endpoint.
- 🎯 99.9%+ Accuracy - Achieves state-of-the-art results on real-world data
- ⚡ Zero ML Knowledge Required - Upload CSV → Get Production API
- 🧠 Neural Embedding Spaces - Automatically discovers hidden patterns in your data
- 📊 Real-time Training Monitoring - Watch your model train with live loss plots
- 🔍 Similarity Search - Find similar records using vector embeddings
- 🎨 Beautiful Visualizations - 2D projections of your high-dimensional data
- 🚀 Production Ready - Scalable batch predictions and real-time inference
```python
# Actual results from fuel card fraud detection:
prediction = {
    'True': 0.9999743700027466,   # 99.997% confidence - IS fraud
    'False': 0.000024269439,      # 0.002% - not fraud
    '<UNKNOWN>': 0.000001335      # 0.0001% - uncertain
}
# Perfect classification with extreme confidence!
```
```python
from test_api_client import FeatrixSphereClient

# Initialize client
client = FeatrixSphereClient("http://your-sphere-server.com")
```
```python
# Upload your CSV and automatically start training
session = client.upload_file_and_create_session("your_data.csv")
session_id = session.session_id

# Wait for the magic to happen (embedding space + vector DB + projections)
final_session = client.wait_for_session_completion(session_id)

# Add a predictor for your target column
client.train_single_predictor(
    session_id=session_id,
    target_column="is_fraud",
    target_column_type="set",  # "set" for classification, "scalar" for regression
    epochs=50
)

# Wait for predictor training
client.wait_for_session_completion(session_id)
```
```python
# Single prediction
result = client.make_prediction(session_id, {
    "transaction_amount": 1500.00,
    "merchant_category": "gas_station",
    "location": "highway_exit"
})

print(result['prediction'])
# {'fraud': 0.95, 'legitimate': 0.05}  # 95% fraud probability!

# Batch predictions on 1000s of records
csv_results = client.test_csv_predictions(
    session_id=session_id,
    csv_file="test_data.csv",
    target_column="is_fraud",
    sample_size=1000
)

print(f"Accuracy: {csv_results['accuracy_metrics']['accuracy']*100:.2f}%")
# Accuracy: 99.87% 🎯
```
```python
# Train on transaction data
client.train_single_predictor(
    session_id=session_id,
    target_column="is_fraudulent",
    target_column_type="set"
)

# Detect fraud in real-time
fraud_check = client.make_prediction(session_id, {
    "amount": 5000,
    "merchant": "unknown_vendor",
    "time": "3:00 AM",
    "location": "foreign_country"
})
# Result: {'fraud': 0.98, 'legitimate': 0.02} ⚠️
```
```python
# Predict customer lifetime value
client.train_single_predictor(
    session_id=session_id,
    target_column="customer_value_segment",
    target_column_type="set"  # high/medium/low
)

# Classify new customers
segment = client.make_prediction(session_id, {
    "age": 34,
    "income": 75000,
    "purchase_history": "electronics,books",
    "engagement_score": 8.5
})
# Result: {'high_value': 0.87, 'medium_value': 0.12, 'low_value': 0.01}
```
```python
# Predict house prices (regression)
client.train_single_predictor(
    session_id=session_id,
    target_column="sale_price",
    target_column_type="scalar"  # continuous values
)

# Get price estimates
price = client.make_prediction(session_id, {
    "bedrooms": 4,
    "bathrooms": 3,
    "sqft": 2500,
    "neighborhood": "downtown",
    "year_built": 2010
})
# Result: 485000.0 (predicted price: $485,000)
```
```python
# Run complete test suite
results = client.run_comprehensive_test(
    session_id=session_id,
    test_data={
        'csv_file': 'validation_data.csv',
        'target_column': 'target',
        'sample_size': 500
    }
)

# Results include:
# ✅ Individual prediction tests
# ✅ Batch accuracy metrics
# ✅ Training performance data
# ✅ Model confidence analysis
```
```python
# Test your model on any CSV file
results = client.test_csv_predictions(
    session_id=session_id,
    csv_file="holdout_test.csv",
    target_column="actual_outcome",
    sample_size=1000
)

print(f"""
🎯 Model Performance:
   Accuracy: {results['accuracy_metrics']['accuracy']*100:.2f}%
   Avg Confidence: {results['accuracy_metrics']['average_confidence']*100:.2f}%
   Correct Predictions: {results['accuracy_metrics']['correct_predictions']}
   Total Tested: {results['accuracy_metrics']['total_predictions']}
""")
```
```python
# Find similar records using neural embeddings
similar = client.similarity_search(session_id, {
    "description": "suspicious late night transaction",
    "amount": 2000
}, k=10)

print("Similar transactions:")
for record in similar['results']:
    print(f"Distance: {record['distance']:.3f} - {record['record']}")
```
```python
# Get neural embeddings for any record
embedding = client.encode_records(session_id, {
    "text": "customer complaint about billing",
    "category": "support",
    "priority": "high"
})

print(f"Embedding dimension: {len(embedding['embedding'])}")
# Embedding dimension: 512 (rich 512-dimensional representation!)
```
```python
# Get detailed training metrics
metrics = client.get_training_metrics(session_id)

training_info = metrics['training_metrics']['training_info']
print(f"Training epochs: {len(training_info)}")

# Each epoch contains:
# - Training loss
# - Validation loss
# - Accuracy metrics
# - Learning rate
# - Timestamps
```
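If you want to plot those loss curves yourself rather than relying on the built-in live loss plots, a sketch like the one below works. The per-epoch field names (`loss`, `validation_loss`) are assumptions; print one entry of `training_info` to confirm what your deployment actually returns.

```python
# Sketch: plot loss curves from the training metrics payload.
# NOTE: 'loss' and 'validation_loss' are assumed field names; inspect
# training_info[0] to confirm what your server returns.
import matplotlib.pyplot as plt

metrics = client.get_training_metrics(session_id)
training_info = metrics['training_metrics']['training_info']

epochs = range(1, len(training_info) + 1)
plt.plot(epochs, [e.get('loss') for e in training_info], label='training loss')
plt.plot(epochs, [e.get('validation_loss') for e in training_info], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()
```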
```python
# See what models are available
models = client.get_session_models(session_id)

print(f"""
📦 Available Models:
   Embedding Space: {'✅' if models['summary']['training_complete'] else '❌'}
   Single Predictor: {'✅' if models['summary']['prediction_ready'] else '❌'}
   Similarity Search: {'✅' if models['summary']['similarity_search_ready'] else '❌'}
   Visualizations: {'✅' if models['summary']['visualization_ready'] else '❌'}
""")
```
| Method | Purpose | Returns |
|---|---|---|
| `upload_file_and_create_session()` | Upload CSV & start training | `SessionInfo` |
| `train_single_predictor()` | Add predictor to session | Training confirmation |
| `make_prediction()` | Single record prediction | Prediction probabilities |
| `predict_records()` | Batch predictions | Batch results |
| `test_csv_predictions()` | CSV testing with accuracy | Performance metrics |
| `run_comprehensive_test()` | Full model validation | Complete test report |
| Method | Purpose | Returns |
|---|---|---|
| `wait_for_session_completion()` | Monitor training progress | Final session state |
| `get_training_metrics()` | Training performance data | Loss curves, metrics |
| `get_session_models()` | Available model inventory | Model status & metadata |
| `similarity_search()` | Find similar records | Nearest neighbors |
| `encode_records()` | Get neural embeddings | Vector representations |
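The vectors returned by `encode_records()` can also be compared directly. As a rough sketch (assuming one call per record and the `embedding` key shown in the encoding example above), cosine similarity between two records looks like this:

```python
# Sketch: cosine similarity between two record embeddings.
# Assumes encode_records() is called per record and exposes the vector
# under the 'embedding' key, as in the encoding example above.
import numpy as np

def embed(record):
    return np.array(client.encode_records(session_id, record)['embedding'])

a = embed({"text": "customer complaint about billing", "category": "support"})
b = embed({"text": "billing dispute raised by customer", "category": "support"})

cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"Cosine similarity: {cosine:.3f}")  # closer to 1.0 = more similar
```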
```python
# Use batch predictions for better throughput
batch_results = client.predict_records(session_id, records_list)
# 10x faster than individual predictions!

# Adjust training parameters for your data size
client.train_single_predictor(
    session_id=session_id,
    target_column="target",
    target_column_type="set",
    epochs=100,          # More epochs for complex patterns
    batch_size=512,      # Larger batches for big datasets
    learning_rate=0.001  # Lower LR for stable training
)
```
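`records_list` above is just a list of record dictionaries. One way to build it from a CSV and feed it through in chunks (a sketch; pandas and the chunk size of 500 are our choices, not requirements):

```python
# Sketch: build records_list from a CSV and predict in chunks.
# pandas and the chunk size of 500 are arbitrary choices; tune for your data.
import pandas as pd

df = pd.read_csv("test_data.csv")
records_list = df.drop(columns=["is_fraud"]).to_dict(orient="records")

batch_results = []
for start in range(0, len(records_list), 500):
    chunk = records_list[start:start + 500]
    batch_results.append(client.predict_records(session_id, chunk))
```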
```python
# Your CSV just needs:
# ✅ Clean column names (no spaces/special chars work best)
# ✅ Target column for prediction
# ✅ Mix of categorical and numerical features
# ✅ At least 100+ rows (more = better accuracy)

# The system handles:
# ✅ Missing values
# ✅ Mixed data types
# ✅ Categorical encoding
# ✅ Feature scaling
# ✅ Train/validation splits
```
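If your column names do contain spaces or special characters, a quick pandas pass before uploading keeps things clean (a sketch; the cleaned filename is arbitrary):

```python
# Sketch: normalize column names before uploading.
import re
import pandas as pd

df = pd.read_csv("your_data.csv")
df.columns = [re.sub(r"[^0-9a-zA-Z]+", "_", c).strip("_").lower() for c in df.columns]
df.to_csv("your_data_clean.csv", index=False)

session = client.upload_file_and_create_session("your_data_clean.csv")
```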
```python
# Check session status anytime
status = client.get_session_status(session_id)
print(f"Status: {status.status}")

for job_id, job in status.jobs.items():
    print(f"Job {job_id}: {job['status']} ({job.get('progress', 0)*100:.1f}%)")

# Monitor training in real-time
import time

while True:
    status = client.get_session_status(session_id)
    if status.status == 'done':
        break
    print(f"Training... {status.status}")
    time.sleep(10)
```
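If you poll manually in production, it's worth guarding against stalled or failed sessions. A sketch with a timeout is below; the 'failed' status string is an assumption, so check the status values your deployment actually reports (or just use `wait_for_session_completion()`).

```python
# Sketch: polling with a timeout and a failure check.
# NOTE: 'failed' is an assumed status value; confirm what your server reports.
import time

deadline = time.time() + 60 * 60  # give up after one hour
while time.time() < deadline:
    status = client.get_session_status(session_id)
    if status.status == 'done':
        break
    if status.status == 'failed':
        raise RuntimeError(f"Session {session_id} failed")
    time.sleep(10)
else:
    raise TimeoutError(f"Session {session_id} did not finish before the deadline")
```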
"We replaced 6 months of ML engineering with 30 minutes of CSV upload. Our fraud detection went from 87% to 99.8% accuracy."
– FinTech Startup
"The similarity search found patterns in our customer data that our data scientists missed. Revenue up 23%."
– E-commerce Platform
"Production-ready ML models without hiring a single ML engineer and earned us 529,000γγ« of extra revenue in 6 months. This is the future."
β Retail Analytics
- Upload your CSV - Any tabular data works
- Specify your target - What do you want to predict?
- Wait for training - Usually 5-30 minutes depending on data size
- Start predicting - Get production-ready API endpoints
```python
# It's literally this simple:
client = FeatrixSphereClient("http://your-server.com")
session = client.upload_file_and_create_session("your_data.csv")
client.train_single_predictor(session.session_id, "target_column", "set")
result = client.make_prediction(session.session_id, your_record)
print(f"Prediction: {result['prediction']}")
```
Transform your data into AI. No PhD required. 🚀
We have more info up at www.featrix.ai
Drop us a note at hello@featrix.ai