CS 53600: Data Communication And Computer Networks
Instructor: Vamsi Addanki
Department: Computer Science, Purdue University
Semester: Spring 2026
Location: STEW 314
Time: TTh 10:30AM - 11:45AM
Credit hours: 3
Teaching assistants: Gustavo Franco Camilo, Jiwon Kim, and Shilong Lei
PSOs: HAAS G050
- Tuesday, 12:30 - 13:20: Gustavo
- Wednesday, 9:30 - 10:20: Jiwon
- Wednesday, 11:30 - 12:20: Shilong
Office hours: On Zoom; login/authentication with your Purdue email is required
- Tuesday, 9:30 - 10:20: Shilong @ Zoom
- Tuesday, 14:30 - 15:30: Gustavo @ Zoom
- Thursday, 9:30 - 10:20: Jiwon @ Zoom
- Schedule a meeting by email, Vamsi @ LWSN 2142J or Zoom
List of Topics
- Introduction and OSI architecture
- Network Latencies, Packet Switching, and Circuit Switching
- Application layer, including HTTP and DNS
- Transport layer: TCP, UDP, and congestion control
- IP layer: Addressing, subnets, NAT, and routing protocols
- Multi-Commodity Flows: Max flow and concurrent flow, ECMP and load balancing
- MAC layer: Forwarding and medium access
- Datacenter Networks Introduction
- Network Topologies
- DNN Training and Collective Communication
- Kernel bypass, and RDMA
- Network Buffering
- Datacenter Congestion Control
- Back to circuit switching: Photonic Interconnects
- Wireless Networks (Optional)
- Satellite Networks (Optional)
Textbooks and materials
- Computer Networking: A Top-Down Approach (Primary textbook for this course)
- Network Algorithmics: An Interdisciplinary Approach to Designing Fast Networked Devices (optional)
- Datacenter networking topics will be based on papers from ACM and IEEE proceedings, available via the corresponding digital libraries. URLs will be posted in the weekly schedule (TBA).
Prerequisites: This course requires background in data structures and algorithms, and programming (C++ and Python).
Assignments: Five assignments spread across the semester.
Assignments (Tentative and subject to change)
No lectures or tutorials will be given on the coding and internals of the frameworks involved in the assignments. This is a learning experience for students to explore on their own. LLMs are on your side, but you should understand and know what you are doing ;) During evaluation, we will ask you to explain randomly chosen pieces of your code, primarily “Explain this part of the code” and, importantly, “Why did you implement it the way you did?”
- For each IP address listed here: https://iperf3serverlist.net/, run a ping test and find the minimum, maximum, and average round-trip time (RTT). Find the geolocation of the IP address from publicly available data. Make a scatter plot of distance vs. RTT. Explain whether and how distance relates to RTT. What do the minimum and maximum RTT convey about the distance and the network state? (A starter sketch for the measurement step appears after the assignment list.)
- Write a socket program from scratch in Python that connects to the iperf3 servers listed on the above website and sends data continuously. The goal is to measure the throughput, i.e., the bits/sec that your socket client application is able to send. At regular intervals, extract the TCP socket statistics including snd_cwnd (hint: TCP_INFO_FMT), a corresponding timestamp, and the current throughput bitrate. Now build a machine learning model (or a local or external LLM, or anything you prefer) that learns the relation between cwnd, RTT, packet loss rate, and throughput at any given time instance. Extract a hand-written congestion control algorithm (specifically for the congestion avoidance phase) based on the insights you get from the model. (A sketch of the TCP_INFO extraction appears after the assignment list.)
- There are two options for the third assignment:
- [With bonus points] Write a kernel module (or eBPF program) implementing your congestion control algorithm. Test its performance using your socket program, logging throughput values over a 10-second duration. Make sure to select your congestion control algorithm in the socket application (see the TCP_CONGESTION sketch after the assignment list). Repeat the same with CUBIC and RENO across multiple runs, and compare the throughput values of all three.
- [No bonus points] Implement your algorithm from Assignment 2 in ns-3 within the TCP/IP stack. Evaluate your algorithm in the following setting:
- Create a leaf-spine topology with 2 ToR switches, 2 spine switches, and 16 servers connected to each ToR. The server links are 100 Gbps, and the leaf-spine links are 400 Gbps. All links have a propagation delay of 500 nanoseconds.
- From each server, launch one flow towards every other server in the topology. You may use the BulkSend application in ns-3. Set the flow size to 64 MB.
- Log the flow completion times for each flow
- Repeat the same with CUBIC and TcpNewReno.
- Plot the flow completion times (average and 99th percentile) for your algorithm, CUBIC, and RENO.
- Note: Make a choice regarding buffer sizes at each switch. (“Why did you make this design choice?”)
- Consider a network with $n$ nodes, each with $d$ incoming and $d$ outgoing incident links. Each link has unit capacity. Find the topology that maximizes the concurrent flow for a uniform all-to-all traffic matrix (an $n\times n$ matrix with all ones, except the diagonal). You may consider formulating it as a linear/integer program and asking a solver (e.g., Gurobi), or come up with a concrete graph and an explanation of why that graph maximizes concurrent flow for all-to-all traffic. (An LP sketch appears after the assignment list.)
- Implement the following algorithms using the PyTorch Gloo backend, and test them across $n$ machines, each corresponding to one student in the group. You may also need to handle non-power-of-two cases depending on your group size. These experiments are easier if the group members are all connected to the same network, preferably a LAN. (A ring reduce-scatter sketch appears after the assignment list.)
- Reduce-scatter: Ring, Recursive doubling, and Swing.
- Broadcast: Binary tree and binomial tree.
- Do one plot for reduce-scatter and one for broadcast, with message size on the x-axis (pick a range) and completion time on the y-axis.
- Bonus: Connect all $n$ machines in a ring using Ethernet cables. Make sure to disconnect the machines from any external network. Set up the LAN, IP addresses, routing, etc., such that all $n$ machines can ping each other and have SSH access to each other. Now run the same experiments as above. You need to demonstrate a functional routing setup, along with at least one in-person run, in order to get the bonus.
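For the first assignment, the sketch below shows one way to collect the min/avg/max RTT per address. It assumes a Linux iputils ping (whose summary line has the min/avg/max/mdev format) and uses placeholder addresses rather than the real list; geolocation lookup and plotting are left to you.

```python
# Minimal RTT-collection sketch for the first assignment.
# Assumptions: Linux iputils ping (summary line "rtt min/avg/max/mdev = ..."),
# and placeholder addresses; substitute the IPs from iperf3serverlist.net.
import re
import subprocess

IPS = ["192.0.2.1", "198.51.100.7"]  # placeholders (TEST-NET addresses)

def ping_stats(ip, count=10):
    """Return (min, avg, max) RTT in milliseconds, or None if unreachable."""
    out = subprocess.run(["ping", "-c", str(count), ip],
                         capture_output=True, text=True).stdout
    match = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/[\d.]+ ms", out)
    return tuple(map(float, match.groups())) if match else None

if __name__ == "__main__":
    for ip in IPS:
        # Pair these numbers with geolocation-derived distances for the scatter plot.
        print(ip, ping_stats(ip))
```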
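For the second assignment, this is a minimal sketch of the TCP_INFO extraction the hint refers to. The endpoint is a placeholder (a real iperf3 server also expects its own control handshake), and the unpack format plus the field indices for tcpi_rtt and tcpi_snd_cwnd assume the classic Linux struct tcp_info layout; verify them against your kernel headers.

```python
# Sketch of sampling snd_cwnd/rtt via TCP_INFO while sending data (second assignment).
# Assumptions: Linux; SERVER/PORT are placeholders; the format string and field
# indices follow the classic struct tcp_info layout in linux/tcp.h.
import socket
import struct
import time

SERVER, PORT = "198.51.100.7", 5201          # placeholder endpoint
TCP_INFO = getattr(socket, "TCP_INFO", 11)   # 11 is the Linux sockopt number
TCP_INFO_FMT = "B" * 8 + "I" * 24            # 8 one-byte fields, then 24 u32 fields
TCP_INFO_LEN = struct.calcsize(TCP_INFO_FMT) # 104 bytes

def tcp_stats(sock):
    """Return (tcpi_rtt in microseconds, tcpi_snd_cwnd in packets)."""
    raw = sock.getsockopt(socket.IPPROTO_TCP, TCP_INFO, TCP_INFO_LEN)
    fields = struct.unpack(TCP_INFO_FMT, raw)
    return fields[8 + 15], fields[8 + 18]    # offsets assume the layout above

if __name__ == "__main__":
    payload = b"\x00" * 65536
    with socket.create_connection((SERVER, PORT)) as s:
        sent, t0, t_last = 0, time.time(), time.time()
        while time.time() - t0 < 10:          # send continuously for 10 seconds
            sent += s.send(payload)
            if time.time() - t_last >= 0.1:   # sample statistics every 100 ms
                rtt_us, cwnd = tcp_stats(s)
                bitrate = 8 * sent / (time.time() - t0)
                print(f"{time.time() - t0:6.2f}s cwnd={cwnd} rtt={rtt_us}us "
                      f"throughput={bitrate / 1e6:.1f} Mbit/s")
                t_last = time.time()
```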
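For the first option of the third assignment, "select your congestion control algorithm in the socket application" maps to the TCP_CONGESTION socket option on Linux. A minimal sketch follows; the name "mycc" is a placeholder for whatever your kernel module or eBPF program registers.

```python
# Selecting a per-socket TCP congestion control algorithm on Linux.
# "mycc" is a placeholder for the name your kernel module / eBPF program registers;
# it must be listed in /proc/sys/net/ipv4/tcp_available_congestion_control.
import socket

TCP_CONGESTION = getattr(socket, "TCP_CONGESTION", 13)  # 13 is the Linux value

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, b"mycc")   # or b"cubic", b"reno"
print(s.getsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, 16)) # confirm the selection
```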
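For the fourth assignment, one way to "ask a solver" is the edge-based LP sketched below (assuming gurobipy is installed): maximize $\lambda$ such that every ordered pair ships $\lambda$ units of flow subject to unit edge capacities. The 4-node complete digraph at the end only illustrates the input format; finding the maximizing $d$-regular topology is the actual assignment.

```python
# LP sketch for the maximum concurrent flow of uniform all-to-all traffic.
# Assumes gurobipy; the 4-node complete digraph below only shows the input format.
import itertools
import gurobipy as gp
from gurobipy import GRB

def max_concurrent_flow(n, edges):
    """edges: directed (u, v) pairs, each with capacity 1; returns optimal lambda."""
    demands = list(itertools.permutations(range(n), 2))   # all ordered (s, t) pairs
    m = gp.Model("concurrent-flow")
    m.Params.OutputFlag = 0
    lam = m.addVar(lb=0.0, name="lambda")
    f = {(d, e): m.addVar(lb=0.0) for d in demands for e in edges}
    # Capacity: total flow over all commodities on each edge is at most 1.
    for e in edges:
        m.addConstr(gp.quicksum(f[d, e] for d in demands) <= 1.0)
    # Conservation: commodity (s, t) pushes lam out of s and absorbs lam at t.
    for s, t in demands:
        for v in range(n):
            out_v = gp.quicksum(f[(s, t), e] for e in edges if e[0] == v)
            in_v = gp.quicksum(f[(s, t), e] for e in edges if e[1] == v)
            rhs = lam if v == s else (-lam if v == t else 0.0)
            m.addConstr(out_v - in_v == rhs)
    m.setObjective(lam, GRB.MAXIMIZE)
    m.optimize()
    return lam.X

if __name__ == "__main__":
    n = 4
    edges = [(u, v) for u in range(n) for v in range(n) if u != v]  # d = 3 here
    print("lambda =", max_concurrent_flow(n, edges))
```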
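For the fifth assignment, below is a minimal ring reduce-scatter sketch on the PyTorch Gloo backend. It assumes MASTER_ADDR, MASTER_PORT, RANK, and WORLD_SIZE are exported on every machine and that the tensor length is divisible by the world size; recursive doubling, Swing, and the broadcast trees follow the same isend/recv pattern.

```python
# Ring reduce-scatter sketch on the Gloo backend (fifth assignment).
# Assumptions: MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE are set in the environment
# on every machine, and the tensor length is divisible by the world size.
import os
import torch
import torch.distributed as dist

def ring_reduce_scatter(x: torch.Tensor) -> torch.Tensor:
    """Sum-reduce x across ranks; rank r returns the fully reduced chunk (r+1) % p."""
    p, r = dist.get_world_size(), dist.get_rank()
    chunks = list(x.clone().chunk(p))          # local working copy, split into p chunks
    right, left = (r + 1) % p, (r - 1) % p
    for step in range(p - 1):
        send_idx = (r - step) % p
        recv_idx = (r - step - 1) % p
        recv_buf = torch.empty_like(chunks[recv_idx])
        send_req = dist.isend(chunks[send_idx], dst=right)  # non-blocking send
        dist.recv(recv_buf, src=left)                       # blocking receive
        send_req.wait()
        chunks[recv_idx] += recv_buf           # accumulate the partial sum
    return chunks[(r + 1) % p]

if __name__ == "__main__":
    # e.g. RANK=0 WORLD_SIZE=4 MASTER_ADDR=10.0.0.1 MASTER_PORT=29500 python ring_rs.py
    dist.init_process_group(backend="gloo",
                            rank=int(os.environ["RANK"]),
                            world_size=int(os.environ["WORLD_SIZE"]))
    x = torch.ones(1 << 20) * (dist.get_rank() + 1)
    out = ring_reduce_scatter(x)
    print(f"rank {dist.get_rank()} got chunk sum {out.sum().item()}")
    dist.destroy_process_group()
```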
Grading
- Class participation: 5%
- Assignments: 45%
- Midterm exam: 20%
- Final exam: 30%
Late submission
- Grace period: 3 days for the entire semester.
- After the grace period, 25% is deducted for every 24 hours late, rounded up (e.g., a submission that is 30 hours late beyond the grace period loses 50%).
AI policy
Use of AI tools (e.g., ChatGPT, Copilot, local LLMs, code generators, auto-completion tools) is allowed to the full extent without restrictions, unless explicitly stated otherwise for a specific assignment.
However:
- You are responsible for fully understanding, validating, and being able to explain any code, design choices, or analysis produced using AI tools.
- During evaluation, you may be asked to explain any part of your submission, including why it was implemented in a particular way.
- Submissions that rely on AI-generated content without clear understanding or correctness may receive reduced credit.
- You are required to acknowledge the use of AI tools in a brief statement describing how they were used (e.g., debugging, boilerplate generation, conceptual guidance). There is no penalty for making use of AI tools.
The goal is to use AI as a productivity and learning tool, not as a substitute for understanding.
Academic integrity
We will default to Purdue’s academic policies throughout this course unless stated otherwise. You are responsible for reading the pages linked below and will be held accountable for their contents.
Honor code
By taking this course, you agree to take the Purdue Honor Pledge: “As a Boilermaker pursuing academic excellence, I pledge to be honest and true in all that I do. Accountable together - we are Purdue.”
Acknowledgements
The course material is largely based on the book “Computer Networking: A Top-Down Approach” and the material made available by its authors (sometimes verbatim). The material also builds upon the course “Network Protocols and Architectures” offered at TU Berlin by Prof. Stefan Schmid. The course page and policies have been borrowed from previous iterations of this course by Prof. Muhammad Shahbaz.