A two-phase pipeline fusing classical filters with CNN backbones. DenseNet achieved 92.5% CIFAR-10 accuracy with roughly 30% fewer parameters than the ResNet baseline.
This project investigates how classical computer vision cues can complement and improve deep learning classifiers. A two-phase pipeline was built: Phase 1 uses classical filter banks to produce robust boundary probability maps, and Phase 2 benchmarks CNN architectures using those maps as additional input features.
Phase 1 – PbLite Boundary Maps: A filter bank combining Difference-of-Gaussian, Leung-Malik, Gabor, and half-disk filters is applied to compute oriented edge and texture responses. These responses are fused with Canny and Sobel baseline edge maps into a Probability-of-Boundary (PbLite) map, providing rich structural priors.
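A minimal sketch of one component of such a filter bank: an oriented Difference-of-Gaussian (DoG) bank built with NumPy/SciPy. The function names, kernel size, scales, and orientation count here are illustrative assumptions, not the project's actual configuration.

```python
# Illustrative oriented DoG filter bank (NumPy/SciPy); parameters are
# example values, not the project's actual settings.
import numpy as np
from scipy import ndimage

def dog_filter_bank(size=15, scales=(1.0, 2.0), n_orientations=8):
    """Return oriented DoG kernels: a Gaussian convolved with a Sobel
    x-derivative kernel, rotated to several orientations per scale."""
    sobel = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    bank = []
    for sigma in scales:
        # Build an isotropic Gaussian kernel by filtering an impulse.
        impulse = np.zeros((size, size))
        impulse[size // 2, size // 2] = 1.0
        gauss = ndimage.gaussian_filter(impulse, sigma)
        base = ndimage.convolve(gauss, sobel)  # DoG at 0 degrees
        for k in range(n_orientations):
            angle = 180.0 * k / n_orientations
            bank.append(ndimage.rotate(base, angle, reshape=False))
    return bank

def edge_responses(img, bank):
    """Max of absolute filter responses across the bank, a simple
    orientation-invariant boundary-strength map."""
    resp = np.stack([np.abs(ndimage.convolve(img, f)) for f in bank])
    return resp.max(axis=0)
```

In the full PbLite pipeline, per-orientation responses like these would be kept separate and combined with the other filter families rather than max-pooled immediately; the max here just keeps the sketch short.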
Phase 2 – CNN Benchmarking: ResNet variants, DenseNet, and a custom lightweight CNN are trained and benchmarked on CIFAR-10 — both with and without PbLite boundary maps as additional input channels.
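Feeding the boundary map in as an extra input channel amounts to simple channel concatenation before the first convolution. A NumPy-only sketch, assuming CIFAR-10 batches in NHWC layout (the function name is hypothetical):

```python
# Append a per-image PbLite map as a fourth input channel (NumPy sketch;
# the function name and NHWC layout are illustrative assumptions).
import numpy as np

def add_boundary_channel(images, pb_maps):
    """images: (N, H, W, 3) RGB batch; pb_maps: (N, H, W) boundary
    probabilities in [0, 1]. Returns an (N, H, W, 4) batch."""
    assert images.shape[:3] == pb_maps.shape
    return np.concatenate([images, pb_maps[..., None]], axis=-1)
```

The only CNN-side change this requires is setting the first convolution's input channel count to 4 instead of 3; all downstream layers are unaffected.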
Parameter Efficiency: DenseNet's dense connectivity reuses feature maps aggressively — each layer receives the concatenated outputs of all preceding layers, so individual layers can stay narrow — allowing it to achieve top accuracy at significantly lower parameter counts than ResNet baselines.
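A back-of-the-envelope comparison makes the efficiency argument concrete: in a dense block with growth rate k, layer i applies a 3×3 convolution over (in_channels + i·k) inputs to produce only k new maps, whereas a plain stack of constant width pays width² per layer. The depths and widths below are illustrative, not the project's actual architecture.

```python
# Parameter-count sketch: 3x3-conv dense block vs. plain constant-width
# stack. Growth rate / width values are illustrative assumptions.
def dense_block_params(depth, growth_rate, in_channels, ksize=3):
    """Each layer sees all previous maps (in_channels + i * growth_rate
    inputs) and emits growth_rate new maps."""
    total = 0
    for i in range(depth):
        fan_in = in_channels + i * growth_rate
        total += ksize * ksize * fan_in * growth_rate
    return total

def plain_stack_params(depth, width, ksize=3):
    """A conventional stack with a constant channel width."""
    return depth * ksize * ksize * width * width

# A 12-layer dense block with growth rate 12 vs. a 12-layer, 64-wide stack:
dense = dense_block_params(depth=12, growth_rate=12, in_channels=16)   # 106,272
plain = plain_stack_params(depth=12, width=64)                         # 442,368
```

Under these example settings the dense block uses roughly a quarter of the parameters of the plain stack while still exposing a wide concatenated feature set to later layers — the same mechanism behind the ~30% saving reported here.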
DenseNet achieved 92.5% accuracy on CIFAR-10 using approximately 30% fewer parameters than the ResNet baseline. Adding PbLite boundary maps as input channels improved classification accuracy on edge cases, demonstrating the complementary value of classical and learned features.