# Best Practices & Troubleshooting
Essential tips for creating effective models and solving common issues.
## Data Collection

### Diversity is Key
Capture images representing the full range of conditions:
| Priority | What to Capture | Why |
|---|---|---|
| High | Various angles (high, low, side) | Different robot viewpoints |
| High | Multiple distances (near, medium, far) | Detection at all ranges |
| High | Different orientations | Objects appear rotated |
| High | Partial occlusions | Real-world scenarios |
| High | Background variations | Different field locations |
| Low | Lighting conditions | YOLO-Pro handles this well* |

*Only capture specific lighting if experiencing actual detection failures.
Quality Over Quantity
100 diverse images > 500 similar images
### Dataset Structure

```text
✅ Good Dataset:
├── Training (80%)
│   ├── Multiple angles & distances
│   ├── Various orientations
│   └── Different conditions
└── Testing (20%)
    └── Never seen during training

❌ Bad Dataset:
├── Training (80%)
│   └── All similar images
└── Testing (20%)
    └── Images from training set
```
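The good split above boils down to a shuffled, disjoint 80/20 partition. A minimal sketch in plain Python (the file names and function are illustrative, not part of any specific tool):

```python
import random

def split_dataset(image_paths, train_frac=0.8, seed=42):
    """Shuffle and split image paths into disjoint train/test sets.

    Keeping the test split disjoint from training is what the
    'never seen during training' rule above requires.
    """
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # fixed seed -> reproducible split
    cut = int(len(paths) * train_frac)
    return paths[:cut], paths[cut:]

# Hypothetical dataset of 100 captured frames
images = [f"img_{i:03d}.jpg" for i in range(100)]
train, test = split_dataset(images)
print(len(train), len(test))  # 80 20
```

A fixed random seed keeps the split stable between runs, so the test set truly stays unseen across retraining sessions.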
## Model Training

### Start Simple
- Use default parameters
- Train for 100 epochs first
- Only increase complexity if needed
Why? Simple models train faster and are easier to debug.
### Monitor Overfitting
| Metric | Training | Testing | Status |
|---|---|---|---|
| Accuracy | 95%+ | <70% | ❌ Overfitting |
| Accuracy | 85% | 82% | ✅ Good |
Solutions:
- Add more diverse data
- Enable data augmentation
- Reduce training epochs
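The table's rule of thumb can be written as a tiny helper. This is a sketch; the 10-point accuracy gap used as the cutoff is an assumption for illustration, not an official threshold:

```python
def overfit_status(train_acc, test_acc, max_gap=0.10):
    """Flag overfitting when training accuracy far exceeds test accuracy.

    max_gap is an illustrative threshold; tune it for your project.
    """
    if train_acc - test_acc > max_gap:
        return "overfitting: add diverse data, augment, or train fewer epochs"
    return "ok"

print(overfit_status(0.95, 0.68))  # first table row  -> flagged as overfitting
print(overfit_status(0.85, 0.82))  # second table row -> ok
</antml_thought>```

Checking this gap after every training run catches overfitting early, before you waste epochs making it worse.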
### Data Augmentation Guide
| Augmentation | Setting | Use When |
|---|---|---|
| Brightness | ±20% | Always recommended |
| Rotation | ±15° | Objects can tilt |
| Zoom | 90-110% | Distance varies |
| Flip | Horizontal/Vertical | Objects can flip |
Don't Over-Augment
Keep variations realistic. A cone won't appear upside down! (And if one does, you may not want to pick it up anyway.)
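One way to stay inside the table's realistic ranges is to sample augmentation parameters explicitly. A plain-Python sketch (the dictionary keys are hypothetical, not any specific library's API):

```python
import random

def sample_augmentation(rng=random):
    """Sample one augmentation setup within the realistic ranges above.

    Note vertical flip is deliberately left out: keep only the
    variations your objects can actually show on the field.
    """
    return {
        "brightness": 1 + rng.uniform(-0.20, 0.20),  # ±20%
        "rotation_deg": rng.uniform(-15, 15),        # ±15°
        "zoom": rng.uniform(0.90, 1.10),             # 90-110%
        "hflip": rng.random() < 0.5,                 # horizontal flip only
    }

params = sample_augmentation()
```

Encoding the ranges in one place makes it easy to audit that no augmentation ever produces an image you'd never see in a real match.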
### Training Parameters

```text
Epochs:
├── Start: 100
├── Good performance: stop
├── Underfitting: 200-300
└── Overfitting: 50-75

Learning Rate:
├── Default: 0.001 (usually optimal)
├── Unstable: 0.0001
└── Too slow: 0.005 (careful!)
```
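The two decision trees above can be folded into one small helper. The accuracy cutoffs used to detect over- and underfitting here are illustrative assumptions, not official guidance:

```python
def next_training_plan(train_acc, test_acc, loss_unstable=False):
    """Pick epochs and learning rate per the decision trees above.

    The 0.15 gap and 0.70 accuracy thresholds are illustrative.
    """
    plan = {"epochs": 100, "learning_rate": 0.001}  # start with defaults
    if train_acc - test_acc > 0.15:   # big train/test gap -> overfitting
        plan["epochs"] = 60           # within the 50-75 range
    elif test_acc < 0.70:             # weak everywhere -> underfitting
        plan["epochs"] = 250          # within the 200-300 range
    if loss_unstable:                 # loss bouncing around -> slow down
        plan["learning_rate"] = 0.0001
    return plan

print(next_training_plan(0.95, 0.65))  # overfitting case -> fewer epochs
```

Keeping the policy in code means every retrain follows the same rules instead of ad-hoc tweaking.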
## Troubleshooting
| Issue | Symptoms | Solutions |
|---|---|---|
| Low Accuracy (<70%) | Many missed detections, poor test performance | • Add diverse training images (focus on failures) • Check label quality and consistency • Increase training epochs (200-300) • Check image quality (blur, focus, lighting) |
| False Positives | Detections where no objects exist | • Increase confidence threshold (0.5 → 0.6) • Add negative examples (images without objects) • Tighten bounding boxes in training • Check for labeling inconsistencies |
| Missing Objects | Objects clearly visible but not detected | • Lower confidence threshold (0.5 → 0.3) • Check if objects are too small • Verify camera settings (focus, exposure) • Add more examples of missed object types |
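The threshold fixes in the last two rows amount to a single filtering step over the detector's raw output. A minimal sketch (the detection dictionaries use a made-up shape, not a specific library's format):

```python
def filter_detections(detections, conf_threshold=0.5):
    """Keep only detections at or above the confidence threshold.

    Raise the threshold (0.5 -> 0.6) to suppress false positives;
    lower it (0.5 -> 0.3) to recover real objects being missed.
    """
    return [d for d in detections if d["conf"] >= conf_threshold]

# Hypothetical raw detector output
dets = [{"label": "cone", "conf": 0.82},
        {"label": "cone", "conf": 0.41}]

strict = filter_detections(dets, 0.6)  # hunting false positives
loose = filter_detections(dets, 0.3)   # hunting missed objects
```

Because this filter runs after inference, you can sweep thresholds on recorded frames without retraining anything.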
## Resources

### Official Documentation

### Tools
- Netron - Visualize models
## Final Thoughts
Success in FIRST Robotics AI:
- **Systematic Approach** - Follow best practices consistently
- **Continuous Iteration** - Each competition improves your model
Remember
The goal isn't perfection; it's continuous improvement.
Each iteration makes your system better. Each competition provides valuable data for the next version.
Good luck, and may your detections be accurate and your inference be fast!