🧠 AI Computer Institute
Content is AI-generated for educational purposes. Verify critical information independently. A bharath.ai initiative.

Time Series Analysis with Python

📚 Data Analysis Techniques · ⏱️ 16 min read · 🎓 Grade 9

📋 Before You Start

To get the most from this chapter, you should be comfortable with: basic Python (variables, loops, importing libraries such as pandas) and simple statistics (mean, average error).

Time Series Analysis with Python

Every stock market trader, meteorologist, and power company planner faces the same challenge: predict the future using the past. Stock prices tomorrow depend on prices today and yesterday. Tomorrow's monsoon intensity depends on seasonal patterns. Next hour's electricity demand depends on time of day and weather. Time series forecasting is predicting values that depend on time.

What Is a Time Series?

A time series is data where each observation carries a timestamp and the order of observations matters:

Date          Temperature  Humidity  Rainfall
2025-01-01    25°C         65%       0mm
2025-01-02    26°C         70%       0mm
2025-01-03    23°C         75%       5mm
...
2025-02-11    28°C         60%       0mm

Or financial data:
Date Time      Stock Price   Volume
2025-02-11 09:15   45,230    100K shares
2025-02-11 09:30   45,350    150K shares
2025-02-11 09:45   45,280    120K shares

Components of Time Series

Every time series has patterns:

Price = Trend + Seasonality + Noise

Trend: Long-term direction (up, down, flat)
  Stock price: ₹100 → ₹120 → ₹150 (uptrend)

Seasonality: Repeating patterns (daily, weekly, yearly)
  Ice cream sales: High in summer, low in winter
  Web traffic: High during day, low at night
  Indian monsoon: June-September every year

Noise: Random fluctuations
  Unexpected event (market crash, festival sale)

Visual:
  Price
    |               /← Trend (upward)
    |            /
    |         /
    |      /___/―___/― ← Seasonality (repeating)
    |   /
    | /
    +────────→ Time
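The additive model above (Price = Trend + Seasonality + Noise) can be simulated in a few lines of NumPy — the coefficients here are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(365)                          # one year of daily time steps

trend = 100 + 0.05 * t                      # slow upward drift
seasonal = 10 * np.sin(2 * np.pi * t / 7)   # repeating weekly cycle
noise = rng.normal(0, 2, size=t.size)       # random fluctuations

series = trend + seasonal + noise           # additive model
```

Plotting `series` shows all three components stacked on top of each other, just like the sketch above.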

Seasonal Decomposition in Python

Separate these components:

import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

# Load time series data
data = pd.read_csv('daily_temperature.csv', parse_dates=['date'])
data.set_index('date', inplace=True)

# Decompose into trend + seasonal + residual
decomposition = seasonal_decompose(data['temperature'], model='additive', period=365)

# Extract components
trend = decomposition.trend
seasonal = decomposition.seasonal
residual = decomposition.resid

# Visualize all four panels
decomposition.plot()
plt.show()

Result:
  Original series
  Trend (smooth line showing overall direction)
  Seasonal (repeating pattern, e.g., summer peaks)
  Residual (noise, randomness)

period=365 because daily temperatures repeat yearly. For data with a weekly cycle (e.g., web traffic by day of week), use period=7.

ARIMA: Classical Forecasting

ARIMA = AutoRegressive Integrated Moving Average. The classic approach before deep learning:

AR (AutoRegressive): Current value depends on past values
  Price_today = c + α₁×Price_yesterday + α₂×Price_2days_ago + noise

I (Integrated): Take differences to make series stationary
  Stationary: No trend, no seasonality
  Example: If price is ₹100, ₹105, ₹110, ₹115 (trending)
  Take differences: 5, 5, 5 (no trend, stationary!)

MA (Moving Average): Current value depends on past errors
  Price_today = c + β₁×Error_yesterday + β₂×Error_2days_ago

ARIMA(p, d, q):
  p = number of AR terms (how many past values)
  d = number of differences (to make stationary)
  q = number of MA terms (how many past errors)

Example: ARIMA(2, 1, 2)
  Use 2 past values, take 1 difference, use 2 past errors
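The differencing step (the "I" in ARIMA) is easy to verify on the toy prices above:

```python
import pandas as pd

prices = pd.Series([100, 105, 110, 115])   # trending, non-stationary
diffs = prices.diff().dropna()             # first difference removes the trend
print(diffs.tolist())                      # → [5.0, 5.0, 5.0]
```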
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.arima.model import ARIMA

# Time series data
data = pd.read_csv('stock_prices.csv', parse_dates=['date'])
data.set_index('date', inplace=True)

# Fit ARIMA(2, 1, 2)
model = ARIMA(data['price'], order=(2, 1, 2))
results = model.fit()

# Forecast next 10 days (mean forecast plus confidence intervals)
forecast = results.get_forecast(steps=10)
print(forecast.summary_frame())

# Residual diagnostics
results.plot_diagnostics()
plt.show()

Problem: Requires manual order selection (p, d, q). How do you know which values to use?

Auto ARIMA: Automatic Selection

from pmdarima import auto_arima

# Auto-select best (p, d, q) — and seasonal (P, D, Q) terms
model = auto_arima(
    data['price'],
    seasonal=True,
    m=12,  # monthly seasonality
    suppress_warnings=True
)

print(model.summary())
# Example output: best model ARIMA(1,1,2)(0,1,1)[12]

# Forecast
forecast = model.predict(n_periods=10)

Real-World Example: Indian Stock Market (BSE Sensex)

Data: Daily Sensex closing price, 2020-2025

Decomposition results:
  Trend: +15% over 5 years (generally upward)
  Seasonal: Small peaks around March, October (fiscal year effects)
  Noise: COVID crash (March 2020), recovery (2021-2022)

ARIMA(3, 1, 2) forecasting:
  Next 5 days' predictions: [78,500, 79,200, 79,800, 78,900, 79,500]
  Actual values: [78,600, 79,150, 79,900, 78,850, 79,450]

  RMSE: 150 points (error ~0.2%, acceptable!)

Seasonality helps:
  BSE often rises before dividend announcements
  Falls during monsoon season (historically)
  Incorporating this in ARIMA improves forecast

Stationarity: Critical Concept

Most time series algorithms require stationary data (no trend, no seasonality, constant mean/variance):

Non-stationary (trending):
  [1, 2, 3, 4, 5] (obvious uptrend)

  ADF test p-value: 0.5 (p > 0.05, not stationary!)

Stationary (no trend):
  [0, 1, 0, -1, 0] (hovering around 0)

  ADF test p-value: 0.02 (p < 0.05, stationary!)

Code:
from statsmodels.tsa.stattools import adfuller

result = adfuller(data['price'])
print(f"p-value: {result[1]}")

if result[1] < 0.05:
    print("Series is stationary ✓")
else:
    print("Series is non-stationary. Apply differencing.")
    diff_data = data['price'].diff().dropna()
    # Retry the ADF test on the differenced series

Prophet: User-Friendly Forecasting (Facebook)

Easier than ARIMA, handles seasonality automatically:

from prophet import Prophet  # the package was renamed; older versions use fbprophet

import numpy as np

# Data format: df with 'ds' (date) and 'y' (value)
df = pd.DataFrame({
    'ds': pd.date_range('2025-01-01', periods=100),
    'y': np.random.randn(100).cumsum() + 100  # example: a random walk
})

model = Prophet()
model.fit(df)

# Forecast
future = model.make_future_dataframe(periods=30)  # 30 days ahead
forecast = model.predict(future)

# Visualize
model.plot(forecast)

Advantages: Simple API, handles missing data, learns seasonality automatically.

Evaluating Forecasts

MAE (Mean Absolute Error):
  = Average |actual - predicted|
  = (|5-5.2| + |6-5.9| + |7-7.1|) / 3 = 0.13
  Interpretation: Off by 0.13 units on average

RMSE (Root Mean Squared Error):
  = sqrt(average (actual - predicted)²)
  Penalizes large errors more than MAE
  Use this for most cases

MAPE (Mean Absolute Percentage Error):
  = Average |actual - predicted| / actual × 100%
  Good when the scale matters (₹100 vs ₹1,00,000)

Example (Stock price forecasting):
  Day 1: Actual ₹100, Predicted ₹102
  Day 2: Actual ₹105, Predicted ₹103

  MAE = (2 + 2) / 2 = ₹2
  RMSE = sqrt((4 + 4) / 2) = 2
  MAPE = ((2/100 + 2/105) / 2) × 100% = 1.95%
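The three metrics from the worked example can be checked with NumPy:

```python
import numpy as np

actual = np.array([100, 105])      # Day 1 and Day 2 actual prices
predicted = np.array([102, 103])   # model's predictions

mae = np.mean(np.abs(actual - predicted))                   # average absolute error
rmse = np.sqrt(np.mean((actual - predicted) ** 2))          # penalizes big misses
mape = np.mean(np.abs(actual - predicted) / actual) * 100   # scale-free, in %

print(mae, rmse, round(mape, 2))   # → 2.0 2.0 1.95
```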

Key Takeaways

  • Time series: Data with temporal dependence, must respect time order
  • Decompose: Separate trend, seasonality, and noise
  • ARIMA: Classical approach, requires stationarity
  • Stationarity: No trend, no seasonality, constant mean/variance
  • Auto ARIMA: Automatically selects best (p, d, q)
  • Prophet: Modern, user-friendly, handles seasonality
  • Evaluate with MAE, RMSE, or MAPE (not accuracy)

Challenge Section

Challenge 1: Download BSE or NSE stock data (Kaggle or yfinance). Decompose into trend/seasonal/noise. What patterns do you see?

Challenge 2: Fit ARIMA and Prophet to the same data. Which forecasts next 30 days better? By how much (RMSE)?

Challenge 3: Use Indian meteorological data to forecast monsoon rainfall. Is it predictable? What factors matter (humidity, pressure, temperature)?

Time series forecasting is predicting the future—master the techniques, and you'll forecast stock prices, weather, demand, and countless real-world phenomena.

🧪 Try This!

  1. Quick Check: Name the three components every time series can be decomposed into
  2. Apply It: Load any dated CSV into pandas, set the date column as the index, and plot the series
  3. Challenge: Run seasonal_decompose on that series and describe the trend and seasonal patterns you find

From Concept to Reality: Time Series Analysis with Python

In the professional world, the difference between a good engineer and a great one often comes down to understanding fundamentals deeply. Anyone can copy code from Stack Overflow. But when that code breaks at 2 AM and your application is down — affecting millions of users — only someone who truly understands the underlying concepts can diagnose and fix the problem.

Time Series Analysis with Python is one of those fundamentals. Whether you end up working at Google, building your own startup, or applying CS to solve problems in agriculture, healthcare, or education, these concepts will be the foundation everything else is built on. Indian engineers are known globally for their strong fundamentals — this is why companies worldwide recruit from IITs, NITs, IIIT Hyderabad, and BITS Pilani. Let us make sure you have that same strong foundation.

Object-Oriented Programming: Modelling the Real World

OOP lets you model real-world entities as code "objects." Each object has properties (data) and methods (behaviour). Here is a practical example:

class BankAccount:
    """A simple bank account — like what SBI or HDFC uses internally"""

    def __init__(self, holder_name, initial_balance=0):
        self.holder = holder_name
        self.balance = initial_balance    # Private in practice
        self.transactions = []            # History log

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("Deposit must be positive")
        self.balance += amount
        self.transactions.append(f"+₹{amount}")
        return self.balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("Insufficient funds!")
        self.balance -= amount
        self.transactions.append(f"-₹{amount}")
        return self.balance

    def statement(self):
        print(f"\n--- Account Statement: {self.holder} ---")
        for t in self.transactions:
            print(f"  {t}")
        print(f"  Balance: ₹{self.balance}")

# Usage
acc = BankAccount("Rahul Sharma", 5000)
acc.deposit(15000)      # Salary credited
acc.withdraw(2000)      # UPI payment to Swiggy
acc.withdraw(500)       # Metro card recharge
acc.statement()

This is encapsulation — bundling data and behaviour together. The user of BankAccount does not need to know HOW deposit works internally; they just call it. Inheritance lets you extend this: a SavingsAccount could inherit from BankAccount and add interest calculation. Polymorphism means different account types can respond to the same .withdraw() method differently (savings accounts might check minimum balance, current accounts might allow overdraft).
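The SavingsAccount idea can be sketched like this — a stripped-down BankAccount is repeated so the snippet runs on its own, and the minimum balance and interest rate are made-up values:

```python
class BankAccount:
    def __init__(self, holder_name, initial_balance=0):
        self.holder = holder_name
        self.balance = initial_balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("Insufficient funds!")
        self.balance -= amount
        return self.balance


class SavingsAccount(BankAccount):
    MIN_BALANCE = 1000               # hypothetical rule for the demo

    def withdraw(self, amount):      # polymorphism: same method, stricter rule
        if self.balance - amount < self.MIN_BALANCE:
            raise ValueError("Would breach minimum balance!")
        return super().withdraw(amount)

    def add_interest(self, rate=0.04):   # behaviour the subclass adds
        self.balance += self.balance * rate
        return self.balance
```

Code that works with a BankAccount keeps working with a SavingsAccount — it just gets the stricter withdrawal rule automatically.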

Did You Know?

🚀 ISRO is the world's 4th largest space agency, powered by Indian engineers. With a budget smaller than some Hollywood blockbusters, ISRO does things that cost 10x more for other countries. The Mangalyaan (Mars Orbiter Mission) proved India could reach Mars for the cost of a film. Chandrayaan-3 succeeded where others failed. This is efficiency and engineering brilliance that the world studies.

🏥 AI-powered healthcare diagnosis is being developed in India. Indian startups and research labs are building AI systems that can detect cancer, tuberculosis, and retinopathy from images — better than human doctors in some cases. These systems are being deployed in rural clinics across India, bringing world-class healthcare to millions who otherwise could not afford it.

🌾 Agriculture technology is transforming Indian farming. Drones with computer vision scan crop health. IoT sensors in soil measure moisture and nutrients. AI models predict yields and optimal planting times. Companies like Ninjacart and SoilCompanion are using these technologies to help farmers earn 2-3x more. This is computer science changing millions of lives in real-time.

💰 India has more coding experts per capita than most Western countries. India hosts platforms like CodeChef, which has over 15 million users worldwide. Indians dominate competitive programming rankings. Companies like Flipkart and Razorpay are building world-class engineering cultures. The talent is real, and if you stick with computer science, you will be part of this story.

Real-World System Design: Swiggy's Architecture

When you order food on Swiggy, here is what happens behind the scenes in about 2 seconds: your location is geocoded (algorithms), nearby restaurants are queried from a spatial index (data structures), menu prices are pulled from a database (SQL), delivery time is estimated using ML models trained on historical data (AI), the order is placed in a distributed message queue (Kafka), a delivery partner is assigned using a matching algorithm (optimization), and real-time tracking begins using WebSocket connections (networking). EVERY concept in your CS curriculum is being used simultaneously to deliver your biryani.

The Process: How Time Series Analysis with Python Works in Production

In professional engineering, implementing time series analysis with python requires a systematic approach that balances correctness, performance, and maintainability:

Step 1: Requirements Analysis and Design Trade-offs
Start with a clear specification: what does this system need to do? What are the performance requirements (latency, throughput)? What about reliability (how often can it fail)? What constraints exist (memory, disk, network)? Engineers create detailed design documents, often including complexity analysis (how does the system scale as data grows?).

Step 2: Architecture and System Design
Design the system architecture: what components exist? How do they communicate? Where are the critical paths? Use design patterns (proven solutions to common problems) to avoid reinventing the wheel. For distributed systems, consider: how do we handle failures? How do we ensure consistency across multiple servers? These questions determine the entire architecture.

Step 3: Implementation with Code Review and Testing
Write the code following the architecture. But here is the thing — it is not a solo activity. Other engineers read and critique the code (code review). They ask: is this maintainable? Are there subtle bugs? Can we optimize this? Meanwhile, automated tests verify every piece of functionality, from unit tests (testing individual functions) to integration tests (testing how components work together).

Step 4: Performance Optimization and Profiling
Measure where the system is slow. Use profilers (tools that measure where time is spent). Optimize the bottlenecks. Sometimes this means algorithmic improvements (choosing a smarter algorithm). Sometimes it means system-level improvements (using caching, adding more servers, optimizing database queries). Always profile before and after to prove the optimization worked.

Step 5: Deployment, Monitoring, and Iteration
Deploy gradually, not all at once. Run A/B tests (comparing two versions) to ensure the new system is better. Once live, monitor relentlessly: metrics dashboards, logs, traces. If issues arise, implement circuit breakers and graceful degradation (keeping the system partially functional rather than crashing completely). Then iterate — version 2.0 will be better than 1.0 based on lessons learned.


How the Web Request Cycle Works

Every time you visit a website, a precise sequence of events occurs. Here is the flow:

    You (Browser)          DNS Server          Web Server
        |                      |                    |
        |---[1] bharath.ai --->|                    |
        |                      |                    |
        |<--[2] IP: 76.76.21.9|                    |
        |                      |                    |
        |---[3] GET /index.html ----------------->  |
        |                      |                    |
        |                      |    [4] Server finds file,
        |                      |        runs server code,
        |                      |        prepares response
        |                      |                    |
        |<---[5] HTTP 200 OK + HTML + CSS + JS --- |
        |                      |                    |
   [6] Browser parses HTML                          |
       Loads CSS (styling)                          |
       Executes JS (interactivity)                  |
       Renders final page                           |

Step 1-2 is DNS resolution — converting a human-readable domain name to a machine-readable IP address. Step 3 is the HTTP request. Step 4 is server-side processing (this is where frameworks like Node.js, Django, or Flask operate). Step 5 is the HTTP response. Step 6 is client-side rendering (this is where React, Angular, or Vue operate).

In a real-world scenario, this cycle also involves CDNs (Content Delivery Networks), load balancers, caching layers, and potentially microservices. Indian companies like Jio use this exact architecture to serve 400+ million subscribers.
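Step 3 of the diagram — the HTTP request itself — is just plain text. A minimal sketch of what the browser sends (host taken from the diagram):

```python
host = "bharath.ai"
request = (
    "GET /index.html HTTP/1.1\r\n"   # method, path, protocol version
    f"Host: {host}\r\n"              # which site on this server we want
    "Connection: close\r\n"          # close the socket after responding
    "\r\n"                           # blank line ends the headers
)
print(request)
```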

Real Story from India

The India Stack Revolution

In the early 1990s, India's economy was closed. Indians could not easily send money abroad or access international services. But starting in 1991, India opened its economy. Young engineers in Bangalore, Hyderabad, and Chennai saw this as an opportunity. They built software companies (Infosys, TCS, Wipro) that served the world.

Fast forward to 2008. India had a problem: 500 million Indians had no formal identity. No bank account, no passport, no way to access government services. The government decided: let us use technology to solve this. UIDAI (Unique Identification Authority of India) was created, and engineers designed Aadhaar.

Aadhaar collects fingerprints and iris scans from every Indian, stores them in massive databases using sophisticated encryption, and allows anyone (even a street vendor) to verify identity instantly. Today, 1.4 billion Indians have Aadhaar. On top of Aadhaar, engineers built UPI (digital payments), Jan Dhan (bank accounts), and ONDC (open e-commerce network).

This entire stack — Aadhaar, UPI, Jan Dhan, ONDC — is called the India Stack. It is considered the most advanced digital infrastructure in the world. Governments and companies everywhere are trying to copy it. And it was built by Indian engineers using computer science concepts that you are learning right now.

Production Engineering: Time Series Analysis with Python at Scale

Understanding time series analysis with python at an academic level is necessary but not sufficient. Let us examine how these concepts manifest in production environments where failure has real consequences.

Consider India's UPI system processing 10+ billion transactions monthly. The architecture must guarantee: atomicity (a transfer either completes fully or not at all — no half-transfers), consistency (balances always add up correctly across all banks), isolation (concurrent transactions on the same account do not interfere), and durability (once confirmed, a transaction survives any failure). These are the ACID properties, and violating any one of them in a payment system would cause financial chaos for millions of people.
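Atomicity can be demonstrated with SQLite, which ships with Python — the account names and balances are invented for the demo:

```python
import sqlite3

# In-memory bank with two accounts; a transfer must be all-or-nothing
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 1000), ("bob", 500)])
conn.commit()

def transfer(conn, src, dst, amount):
    try:
        with conn:  # transaction: commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = ?",
                         (amount, src))
            (balance,) = conn.execute(
                "SELECT balance FROM accounts WHERE name = ?", (src,)).fetchone()
            if balance < 0:
                raise ValueError("insufficient funds")  # triggers rollback
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = ?",
                         (amount, dst))
    except ValueError:
        pass  # transfer rejected; no half-transfer persisted

transfer(conn, "alice", "bob", 5000)   # fails: alice has only 1000
transfer(conn, "alice", "bob", 300)    # succeeds
```

After the failed transfer, both balances are untouched — never debited-but-not-credited.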

At scale, you also face the thundering herd problem: what happens when a million users check their exam results at the same time? (CBSE result day, anyone?) Without rate limiting, connection pooling, caching, and graceful degradation, the system crashes. Good engineering means designing for the worst case while optimising for the common case. Companies like NPCI (the organisation behind UPI) invest heavily in load testing — simulating peak traffic to identify bottlenecks before they affect real users.
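The rate limiting mentioned above is often implemented as a token bucket — a sketch with made-up numbers:

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second on average, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                # over the limit: reject (or queue) the request

bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]   # a sudden burst of 15 requests
```

The first 10 requests pass (the burst allowance); the rest are rejected until tokens refill.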

Monitoring and observability become critical at scale. You need metrics (how many requests per second? what is the 99th percentile latency?), logs (what happened when something went wrong?), and traces (how did a single request flow through 15 different microservices?). Tools like Prometheus, Grafana, ELK Stack, and Jaeger are standard in Indian tech companies. When Hotstar streams IPL to 50 million concurrent users, their engineering team watches these dashboards in real-time, ready to intervene if any metric goes anomalous.

The career implications are clear: engineers who understand both the theory (from chapters like this one) AND the practice (from building real systems) command the highest salaries and most interesting roles. India's top engineering talent earns ₹50-100+ LPA at companies like Google, Microsoft, and Goldman Sachs, or builds their own startups. The foundation starts here.

Checkpoint: Test Your Understanding 🎯

Before moving forward, ensure you can answer these:

Question 1: Explain the tradeoffs in time series analysis with python. What is better: speed or reliability? Can we have both? Why or why not?

Answer: There is no universal best — it depends on requirements. A live trading forecast must be fast even if slightly less accurate; a monthly demand forecast can afford hours of model fitting for better accuracy. Good engineers state the requirement first (real-time or batch?), then choose the tradeoff.

Question 2: How would you test if your implementation of time series analysis with python is correct and performant? What would you measure?

Answer: Correctness testing, performance benchmarking, edge case handling, failure scenarios — just like professional engineers do.

Question 3: If time series analysis with python fails in a production system (like UPI), what happens? How would you design to prevent or recover from failures?

Answer: Redundancy, failover systems, circuit breakers, graceful degradation — these are real concerns at scale.

Key Vocabulary

Here are important terms from this chapter that you should know:

Trend: The long-term direction of a series (up, down, or flat)
Seasonality: A pattern that repeats at a fixed interval (daily, weekly, or yearly)
Stationarity: A series with no trend or seasonality and a constant mean and variance
ARIMA: A classical forecasting model combining autoregression, differencing, and moving-average terms
RMSE: Root Mean Squared Error — a forecast accuracy metric that penalizes large errors

💡 Interview-Style Problem

Here is a problem that frequently appears in technical interviews at companies like Google, Amazon, and Flipkart: "Design a URL shortener like bit.ly. How would you generate unique short codes? How would you handle millions of redirects per second? What database would you use and why? How would you track click analytics?"

Think about: hash functions for generating short codes, read-heavy workload (99% redirects, 1% creates) suggesting caching, database choice (Redis for cache, PostgreSQL for persistence), and horizontal scaling with consistent hashing. Try sketching the system architecture on paper before looking up solutions. The ability to think through system design problems is the single most valuable skill for senior engineering roles.
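The short-code generation hinted at above is commonly done by base62-encoding a database row ID — a sketch:

```python
import string

# 0-9, a-z, A-Z: 62 characters total
ALPHABET = string.digits + string.ascii_lowercase + string.ascii_uppercase

def encode_base62(n):
    """Turn an auto-incrementing row ID into a short code, e.g. short.ly/<code>."""
    if n == 0:
        return ALPHABET[0]
    code = []
    while n > 0:
        n, rem = divmod(n, 62)     # peel off one base-62 digit at a time
        code.append(ALPHABET[rem])
    return "".join(reversed(code))

print(encode_base62(125))   # → "21"
```

Six base62 characters cover 62⁶ ≈ 56 billion URLs, which is why real shorteners keep codes so short.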

Where This Takes You

The knowledge you have gained about time series analysis with python is directly applicable to: competitive programming (Codeforces, CodeChef — India has the 2nd largest competitive programming community globally), open-source contribution (India is the 2nd largest contributor on GitHub), placement preparation (these concepts form 60% of technical interview questions), and building real products (every startup needs engineers who understand these fundamentals).

India's tech ecosystem offers incredible opportunities. Freshers at top companies earn ₹15-50 LPA; experienced engineers at FAANG companies in India earn ₹50-1 Cr+. But more importantly, the problems being solved in India — digital payments for 1.4 billion people, healthcare AI for rural areas, agricultural tech for 150 million farmers — are some of the most impactful engineering challenges in the world. The fundamentals you are building will be the tools you use to tackle them.

Crafted for Class 7–9 • Data Analysis Techniques • Aligned with NEP 2020 & CBSE Curriculum
