Low accuracy (~50%) with SSL (BYOL/MAE/VICReg) on hyperspectral crop stress data — what am I missing? [R]
Summary
A researcher reports achieving only ~50% accuracy with SSL methods (BYOL, MAE, VICReg) on a hyperspectral crop stress task — detecting nitrogen deficiency in cabbage — and asks for advice on SSL techniques, feature engineering, and model architectures better suited to spectral data.
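For context on the objectives named above, here is a minimal NumPy sketch of a VICReg-style loss (invariance + variance + covariance terms) on two batches of embeddings from augmented views. This is a generic illustration of the objective, not the poster's code; the weights and epsilon follow common defaults and are assumptions.

```python
import numpy as np

def vicreg_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """VICReg-style loss for two embedding batches of shape (n, d)."""
    n, d = z_a.shape
    # Invariance term: mean squared error between the two views.
    sim = np.mean((z_a - z_b) ** 2)

    # Variance term: hinge keeping each embedding dimension's std above 1,
    # which discourages collapse to a constant vector.
    def var_term(z):
        std = np.sqrt(z.var(axis=0) + eps)
        return np.mean(np.maximum(0.0, 1.0 - std))
    var = var_term(z_a) + var_term(z_b)

    # Covariance term: penalize off-diagonal covariance entries,
    # decorrelating the embedding dimensions.
    def cov_term(z):
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (n - 1)
        off_diag = cov - np.diag(np.diag(cov))
        return np.sum(off_diag ** 2) / d
    cov = cov_term(z_a) + cov_term(z_b)

    return sim_w * sim + var_w * var + cov_w * cov
```

For spectral data, the two "views" would come from augmentations that preserve the stress signature (e.g. band dropout or spectral jitter — a choice the thread would need to settle, since aggressive spatial augmentations designed for RGB images can destroy the narrow-band features that matter here).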
Similar Articles
Modeling Sparse and Bursty Vulnerability Sightings: Forecasting Under Data Constraints
An academic study compares SARIMAX and Poisson regression for forecasting sparse, bursty vulnerability-sighting time series, finding the count-based models more stable.
What should I do to have a good OD model? [P]
A user is seeking advice on improving their object detection model trained with YOLO11n for deployment on a Raspberry Pi 5, struggling with the gap between theoretical mAP50 metrics and practical detection performance.
EdgeDetect: Importance-Aware Gradient Compression with Homomorphic Aggregation for Federated Intrusion Detection
EdgeDetect is a federated intrusion detection system for 6G-IoT environments that combines importance-aware gradient binarization (32× compression) with Paillier homomorphic encryption to achieve 98% accuracy on CIC-IDS2017 while reducing communication overhead by 96.9% and enabling deployment on resource-constrained devices like Raspberry Pi 4.
One line system prompt change dropped model quality from 84% to 52%. How are people monitoring semantic quality in production?
A developer shares their experience of a single system prompt change degrading LLM response quality without triggering traditional monitoring alerts, and describes internal tooling they built to monitor semantic quality in production LLM applications.
HyperLens: Quantifying Cognitive Effort in LLMs with Fine-grained Confidence Trajectory
This paper introduces HyperLens, a high-resolution probe to quantify cognitive effort in LLMs by tracing fine-grained confidence trajectories across layers. It reveals that complex tasks require higher cognitive effort and demonstrates how Supervised Fine-Tuning can reduce this effort, potentially degrading performance.