Crowd Monitoring

AI-Based Real-Time Crowd Monitoring and Human Activity Recognition on Qualcomm QIDK

Project Description

This project focuses on developing a real-time, edge-based crowd monitoring system using advanced computer vision and deep learning techniques deployed on the Qualcomm Intelligent Development Kit (QIDK). The system leverages fine-tuned YOLO models for accurate human detection and integrates tracking and activity recognition modules to analyze crowd behavior in dynamic environments.

By utilizing the heterogeneous computing capabilities of QIDK, including CPU, GPU, and NPU, the system is designed to perform efficient, low-latency inference directly on the edge device without reliance on cloud infrastructure. This ensures improved privacy, reduced network dependency, and faster response times.

The proposed system processes live or recorded video streams to detect individuals, track their movements across frames, and classify their activities, such as walking, standing, running, or forming groups. In addition, it provides crowd-level insights such as density estimation and people counting, enabling applications in smart surveillance, public safety, traffic management, and event monitoring. The overall objective is to build a scalable, optimized edge AI solution capable of delivering real-time analytics for intelligent crowd management.
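
As an illustration of the detection-tracking-counting loop described above, the following sketch assumes an Ultralytics YOLO person-detection model and OpenCV for frame capture; the weights file, camera source, and tracking call are placeholders, and on QIDK the model would be exported and executed through the Qualcomm AI stack rather than this generic runtime.

```python
# Minimal sketch: per-frame person detection, ID-preserving tracking, and people counting.
# Assumes the Ultralytics YOLO API and OpenCV; weights and camera source are placeholders.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # placeholder: fine-tuned person-detection weights
cap = cv2.VideoCapture(0)         # live camera or a recorded video path

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Detect and track people (COCO class 0) with persistent IDs across frames
    results = model.track(frame, classes=[0], persist=True, verbose=False)
    boxes = results[0].boxes
    annotated = results[0].plot()                  # draw boxes and track IDs
    cv2.putText(annotated, f"People: {len(boxes)}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("Crowd Monitoring", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Activity classification would plug in after the tracking step, consuming each track's recent motion history.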

Features

  • Real-Time Human Detection

    Fine-tuned YOLO models for accurate and fast person detection optimized for edge deployment on QIDK

  • Multi-Object Tracking

    Tracks individuals across frames using lightweight algorithms with consistent IDs for behavior analysis

  • Human Activity Recognition

    Classifies activities such as walking, standing, running, and crowd gathering, with extensibility for abnormal-activity detection

  • Crowd Analytics

    Real-time people counting, crowd density estimation, and activity distribution analysis (see the density sketch after this list)

  • Edge AI Optimization

    Efficient utilization of NPU, GPU, and CPU with low latency and high throughput for on-device processing

  • Visualization & Monitoring

    Bounding boxes with activity labels and real-time overlay on video streams with optional dashboard integration

  • Scalable Architecture

    Supports multiple camera inputs and easily extendable for smart city applications

  • Privacy-Preserving Processing

    Fully on-device processing with no dependency on cloud data transfer for enhanced security
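
The crowd analytics feature above can be illustrated with a simple grid-based density estimate computed from per-frame detections; the grid size and frame resolution below are assumptions chosen for illustration, not values from the project.

```python
# Minimal sketch: grid-based crowd density estimation from detected person boxes.
import numpy as np

def density_grid(detections, frame_w=1920, frame_h=1080, cols=8, rows=4):
    """Count people per grid cell using bounding-box centroids."""
    grid = np.zeros((rows, cols), dtype=int)
    for x1, y1, x2, y2 in detections:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        col = min(int(cx / frame_w * cols), cols - 1)
        row = min(int(cy / frame_h * rows), rows - 1)
        grid[row, col] += 1
    return grid

# Example with three hypothetical person boxes (x1, y1, x2, y2)
boxes = [(100, 200, 180, 420), (900, 300, 980, 520), (950, 320, 1030, 540)]
grid = density_grid(boxes)
print("Total people:", grid.sum(), "| densest cell:", grid.max())
```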

Waste Segregation

Edge-AI Based Real-Time Waste Segregation System using QIDK

Project Description

This project presents an intelligent waste segregation system that leverages Edge AI to automatically classify and separate waste into wet and dry categories at the point of disposal. The system integrates a camera-based sensing module with a fine-tuned YOLO model deployed on the Qualcomm QIDK platform to perform real-time inference directly on-device.

Captured images of waste are processed using robust preprocessing techniques to handle variations in lighting and environmental noise. The trained model classifies waste items, and the results are used to control an ESP-based actuation mechanism that physically directs waste into the appropriate compartment of a dual-bin system.

By performing inference at the edge, the system eliminates dependency on cloud infrastructure, ensuring low latency, enhanced privacy, and reliable operation. The solution is scalable, energy-efficient, and well-suited for smart city deployments, promoting sustainable waste management practices.
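
A minimal sketch of this classify-then-actuate flow is shown below, assuming an Ultralytics YOLO model fine-tuned for "wet"/"dry" classes and a single-byte serial command protocol to the ESP board; the weights file, serial port, and command bytes are placeholders.

```python
# Minimal sketch: classify a captured waste image and drive the ESP-based actuator.
import cv2
import serial                      # pyserial
from ultralytics import YOLO

model = YOLO("waste_yolo.pt")                    # placeholder: fine-tuned wet/dry weights
esp = serial.Serial("/dev/ttyUSB0", 115200)      # placeholder: ESP serial port

def segregate(frame):
    result = model(frame, verbose=False)[0]
    if len(result.boxes) == 0:
        return None                              # nothing detected, no actuation
    # Use the highest-confidence detection as the waste category
    best = max(result.boxes, key=lambda b: float(b.conf))
    label = result.names[int(best.cls)]
    esp.write(b"W" if label == "wet" else b"D")  # assumed command bytes for the ESP firmware
    return label

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    print("Routed to:", segregate(frame))
```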

Features

  • Real-Time Waste Classification

    YOLO-based detection for wet and dry waste with accurate classification at disposal point

  • Edge AI Deployment

    On-device inference using QIDK with no cloud dependency

  • Automated Segregation Mechanism

    ESP-based control system for automatic routing into dual compartments

  • Robust Image Processing

    Handles lighting variations and noise to improve classification reliability (see the preprocessing sketch after this list)

  • Low-Latency Decision Making

    Instant classification and actuation for immediate waste routing

  • Scalable Smart City Solution

    Suitable for public infrastructure deployment across urban environments

  • Privacy-Preserving System

    No external data transmission, ensuring user and operational privacy
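
The preprocessing step referenced in the feature list can be sketched as a lighting normalization pass (CLAHE on the luminance channel) followed by light denoising; the parameter values below are illustrative rather than tuned.

```python
# Minimal sketch: normalize lighting and suppress noise before classification.
import cv2

def preprocess(frame):
    # Equalize luminance only, so the colors used by the classifier are preserved
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    normalized = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
    # Mild Gaussian blur to reduce sensor and environmental noise
    return cv2.GaussianBlur(normalized, (3, 3), 0)
```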

Smart Language Models

Edge-Optimized Small Language Models for Smart City Intelligence on QIDK8750

Project Description

This project focuses on enabling intelligent decision-making in smart city environments using edge-optimized Small Language Models (SLMs) deployed on the Qualcomm QIDK8750 platform. The system processes heterogeneous IoT data streams such as air quality, water quality, energy usage, and occupancy, transforming them into structured insights for contextual reasoning.

A quantized SLM is integrated with an edge analytics pipeline to perform natural language understanding and generate meaningful summaries and responses without relying on cloud-based services. The model is optimized using ONNX conversion and Qualcomm's AI acceleration framework to ensure efficient execution on the Snapdragon NPU.

The system supports real-time, privacy-preserving conversational interaction through an Android-based interface, allowing users to query city data using natural language. By combining edge analytics with language intelligence, the solution enables scalable, low-latency, and offline-capable smart city applications.
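
The data-to-insight flow can be sketched as follows: raw IoT readings are summarized into a structured context block that is prepended to the user's question before it reaches the SLM. The sensor fields below are illustrative, and `run_slm` is a placeholder for the quantized model served through ONNX Runtime / Qualcomm QNN on the device.

```python
# Minimal sketch: turn heterogeneous IoT readings into a structured prompt for the SLM.
import json

def build_context(readings: dict) -> str:
    """Summarize sensor data into a compact, structured context block."""
    lines = [f"- {sensor}: {json.dumps(values)}" for sensor, values in readings.items()]
    return "Latest city sensor readings:\n" + "\n".join(lines)

def run_slm(prompt: str) -> str:
    # Placeholder: in the deployed system this calls the quantized SLM on the Snapdragon NPU
    raise NotImplementedError

readings = {
    "air_quality": {"pm2_5": 41, "aqi": 112},          # illustrative values
    "water": {"turbidity_ntu": 3.2, "ph": 7.1},
    "energy": {"grid_load_kw": 5240},
    "occupancy": {"zone_a": 320, "zone_b": 85},
}
prompt = build_context(readings) + "\n\nQuestion: Which zone needs attention and why?"
print(prompt)
# answer = run_slm(prompt)
```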

Features

  • Edge-Optimized Language Intelligence

    Small Language Model (SLM) deployed on-device with contextual reasoning and query answering capabilities

  • Multi-Source IoT Data Integration

    Processes data from air, water, energy, and occupancy sensors with structured summaries

  • NPU-Accelerated Inference

    Quantized models (INT8/INT4) with efficient execution on Snapdragon NPU

  • Low-Latency Conversational Interface

    Real-time natural language interaction with Android-based user interface

  • Privacy-Preserving Processing

    Fully offline capability with no cloud dependency for enhanced privacy

  • Model Optimization Techniques

    ONNX conversion and Qualcomm QNN deployment with memory and latency optimization (see the quantization sketch after this list)

  • Scalable Smart City Deployment

    Applicable across multiple urban domains for intelligent city management
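
The model-optimization step in the feature list can be sketched with ONNX Runtime's dynamic quantization utilities and its QNN execution provider; the file names are placeholders, and the actual project may use Qualcomm's own conversion and compilation tooling instead.

```python
# Minimal sketch: INT8 weight quantization of an exported SLM and an NPU-backed inference session.
import onnxruntime as ort
from onnxruntime.quantization import quantize_dynamic, QuantType

# Dynamic (weight-only) INT8 quantization to cut memory footprint and latency
quantize_dynamic(
    model_input="slm_fp32.onnx",    # placeholder: SLM already exported to ONNX
    model_output="slm_int8.onnx",
    weight_type=QuantType.QInt8,
)

# Prefer the QNN execution provider (Snapdragon NPU) and fall back to CPU
session = ort.InferenceSession(
    "slm_int8.onnx",
    providers=["QNNExecutionProvider", "CPUExecutionProvider"],
)
```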

Home Automation

Edge AI-Powered Emotionally Aware Home Automation System

Project Description

The Emotion-Based Home Automation System is an intelligent smart home solution that adapts the indoor environment to the user's emotional state. By leveraging computer vision and machine learning techniques, the system detects human emotions in real time and automatically controls smart devices such as ambient lighting, fans, air conditioners, and Alexa-enabled devices.

Implemented using the QIDK development kit, the system enhances user comfort, personalization, and overall living experience by creating a responsive and emotionally aware environment. The solution combines advanced emotion recognition algorithms with smart device integration to provide seamless, context-aware home automation.
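
A minimal sketch of the emotion-to-ambience mapping is shown below. DeepFace is used only as a stand-in emotion recognizer, and the device-control call is a placeholder for the actual smart-device and Alexa integration used in the project.

```python
# Minimal sketch: detect the dominant emotion from one camera frame and apply a scene.
import cv2
from deepface import DeepFace

SCENES = {
    "happy":    {"lights": "bright_warm", "fan": "medium", "music": "upbeat"},
    "sad":      {"lights": "soft_amber",  "fan": "low",    "music": "calm"},
    "angry":    {"lights": "dim_cool",    "fan": "high",   "music": "ambient"},
    "surprise": {"lights": "neutral",     "fan": "medium", "music": "none"},
}

def apply_scene(scene: dict):
    # Placeholder: replace with real IoT / Alexa device-control calls
    print("Applying scene:", scene)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    analysis = DeepFace.analyze(frame, actions=["emotion"], enforce_detection=False)
    emotion = analysis[0]["dominant_emotion"]
    apply_scene(SCENES.get(emotion, SCENES["happy"]))
```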

Features

  • Real-Time Emotion Detection

    Uses camera input and AI models to identify emotions such as happiness, sadness, anger, and surprise

  • CPU & GPU-Based Inference with Low Latency

    The system performs model inference on both CPU and GPU, ensuring efficient utilization of hardware resources. GPU acceleration is used for faster parallel processing of deep learning models (see the device-selection sketch after this list).

  • Automated Device Control

    Dynamically controls ambient lighting, fans, AC, and Alexa based on detected emotions

  • Smart Appliance Connectivity

    Supports integration with IoT-enabled devices including smart AC and automated windows

  • Real-Time Processing

    Provides immediate response with minimal latency using optimized models

  • Modular & Scalable Design

    Easily extendable to include additional devices like speakers and other smart appliances

  • User-Friendly Interface

    Simple interaction through mobile application for easy control and configuration
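
For the CPU/GPU inference feature above, a device-selection sketch could look like the following, assuming a PyTorch emotion model; the weights file is a placeholder.

```python
# Minimal sketch: run the emotion model on GPU when available, otherwise on CPU.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.jit.load("emotion_model.pt", map_location=device).eval()  # placeholder weights

@torch.no_grad()
def infer(batch: torch.Tensor) -> torch.Tensor:
    # Move the input batch to the selected device and run inference
    return model(batch.to(device))
```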