Bio

I am a tenure-track (assistant) professor at Purdue University. My research lies at the intersection of systems and theory, with a current focus on programmable photonic interconnects, scale-up GPU networks, and learning-augmented network algorithms. I lead the Systems and Theory Group for Intelligent and Adaptive Networks (STyGIANet).

Prior to joining Purdue in fall 2025, I completed my PhD at TU Berlin under the supervision of Prof. Stefan Schmid. I obtained an MSc in Computer Science (Specialization in Networks) from Sorbonne University in Paris, France, in 2020, and did my Master’s thesis at ETH Zurich under the supervision of Prof. Laurent Vanbever. From 2017 to 2019, I worked as a research engineer at Telecom Paris under the supervision of Prof. Dario Rossi and Prof. Luigi Iannone. I completed my Bachelor of Engineering (B.E.) in 2017 at Birla Institute of Technology and Science, Goa Campus, India.

Research Interests: I am broadly interested in network topologies; congestion control and network flow problems; online algorithms and competitive analysis; buffer requirements and management; and related areas. My interests are mainly at the intersection of networking theory and systems.

Other Interests: My favorite music


Recent Publications

A complete list of publications can be found here.

  1. SOSA ’26
    The Harmonic Policy for Online Buffer Sharing is (2 + ln n)-Competitive: A Simple Proof
    Vamsi Addanki, Julien Dallot, Leon Kellerhals, Maciej Pacut, and Stefan Schmid.
    Proceedings of the 2026 Symposium on Simplicity in Algorithms (SOSA), Vancouver, Canada, 2026.
    PDF  Slides  Abstract  BibTeX
    The problem of online buffer sharing is expressed as follows. A switch with n output ports receives a stream of incoming packets. When an incoming packet is accepted by the switch, it is stored in a shared buffer of capacity B common to all packets and awaits its transmission through its corresponding output port determined by its destination. Each output port transmits one packet per time unit. The problem is to find an algorithm for the switch to accept or reject a packet upon its arrival in order to maximize the total number of transmitted packets.
         Building on the work of Kesselman et al. (STOC 2001) on split buffer sharing, Kesselman and Mansour (TCS 2004) considered the problem of online buffer sharing, which models most deployed internet switches. In their work, they presented the Harmonic policy and proved that it is (2 + ln n)-competitive, which remains the best known competitive ratio for this problem. The Harmonic policy has unfortunately seen little practical adoption, as it relies on sorting, which is deemed costly in practice, especially on network switches processing multiple terabits of packets per second. While the Harmonic policy is elegant, the original proof is also rather complex and involves a lengthy matching routine along with multiple intermediary results.
         This note presents a simplified Harmonic policy, both in terms of implementation and proof. First, we show that the Harmonic policy can be implemented with a constant number of threshold checks per packet, matching the widely deployed Dynamic Threshold policy. Second, we present a simple proof that shows the Harmonic policy is (2 + ln n)-competitive. In contrast to the original proof, the current proof is direct and relies on a 3-partitioning of the packets.
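    The shared-buffer model described above can be sketched in a few lines. The admission rule below is a single static per-queue threshold, purely for illustration; it is not the Harmonic policy's rule, only the same accept/reject interface and the one-packet-per-port-per-time-unit transmission model.

```python
from collections import deque

class SharedBufferSwitch:
    """Toy model of online buffer sharing: n output ports share one
    buffer of capacity B; each port transmits one packet per time unit.
    The static per-queue threshold is illustrative only."""

    def __init__(self, n_ports, capacity, threshold):
        self.B = capacity
        self.threshold = threshold  # illustrative static threshold
        self.queues = [deque() for _ in range(n_ports)]
        self.occupancy = 0
        self.transmitted = 0

    def arrive(self, port, packet):
        # Accept only if the shared buffer has room and the target
        # queue is below its threshold (one check per packet).
        if self.occupancy < self.B and len(self.queues[port]) < self.threshold:
            self.queues[port].append(packet)
            self.occupancy += 1
            return True
        return False

    def tick(self):
        # Each output port transmits at most one packet per time unit.
        for q in self.queues:
            if q:
                q.popleft()
                self.occupancy -= 1
                self.transmitted += 1
```

    A threshold check per packet, as in this sketch, is exactly the operation count that makes such policies deployable on switches, in contrast to per-packet sorting.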
    @inproceedings{buffersosa2026,
      author = {Addanki, Vamsi and Dallot, Julien and Kellerhals, Leon and Pacut, Maciej and Schmid, Stefan},
      title = {The Harmonic Policy for Online Buffer Sharing is $(2 + \ln n)$-Competitive: A Simple Proof},
      year = {2026},
      booktitle = {Proceedings of the 2026 Symposium on Simplicity in Algorithms (SOSA)}
    }
    
  2. HotNets ’25
    When Light Bends to the Collective Will: A Theory and Vision for Adaptive Photonic Scale-up Domains
    Vamsi Addanki.
    Proceedings of the 24th ACM Workshop on Hot Topics in Networks, College Park, Maryland, USA, 2025.
    PDF  Slides  Abstract  BibTeX
    As chip-to-chip silicon photonics gain traction for their bandwidth and energy efficiency, collective communication has emerged as a critical bottleneck in scale-up systems. Programmable photonic interconnects offer a promising path forward: by dynamically reconfiguring the fabric, they can establish direct, high-bandwidth optical paths between communicating endpoints — synchronously and guided by the structure of collective operations (e.g., AllReduce). However, realizing this vision — when light bends to the collective will — requires navigating a fundamental trade-off between reconfiguration delay and the performance gains of adaptive topologies.
         In this paper, we present a simple theoretical framework for adaptive photonic scale-up domains that makes this trade-off explicit and clarifies when reconfiguration is worthwhile. Along the way, we highlight a connection — not surprising but still powerful — between the Birkhoff–von Neumann (BvN) decomposition, maximum concurrent flow (a classic measure of network throughput), and the well-known α–β cost model for collectives. Finally, we outline a research agenda in algorithm design and systems integration that can build on this foundation.
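    The BvN decomposition mentioned above expresses a doubly stochastic traffic matrix as a weighted sum of permutation matrices, each of which corresponds to one circuit configuration of the photonic fabric. A minimal greedy sketch, using brute-force permutation search (fine for small n; a real implementation would use a matching algorithm):

```python
from itertools import permutations

def bvn_decompose(matrix, tol=1e-9):
    """Greedy Birkhoff-von Neumann decomposition of a doubly stochastic
    matrix into weighted permutation matrices. Each permutation maps to
    one circuit configuration; its weight is the fraction of time (or
    bandwidth) that configuration should be held."""
    n = len(matrix)
    M = [row[:] for row in matrix]
    terms = []  # list of (weight, permutation) pairs
    remaining = sum(map(sum, M))
    while remaining > tol:
        # Birkhoff's theorem guarantees a permutation supported on
        # strictly positive entries exists while mass remains.
        perm = next(p for p in permutations(range(n))
                    if all(M[i][p[i]] > tol for i in range(n)))
        w = min(M[i][perm[i]] for i in range(n))  # largest feasible weight
        for i in range(n):
            M[i][perm[i]] -= w
        terms.append((w, perm))
        remaining -= n * w
    return terms
```

    For a uniform 2×2 demand, this yields the two cyclic permutations with weight 0.5 each; the number of terms is what drives reconfiguration count, and hence the reconfiguration-delay side of the trade-off.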
    @inproceedings{adaptivephotonicshotnets25,
      author = {Addanki, Vamsi},
      title = {When Light Bends to the Collective Will: A Theory and Vision for Adaptive Photonic Scale-up Domains},
      year = {2025},
      booktitle = {Proceedings of the 24th ACM Workshop on Hot Topics in Networks}
    }
    
  3. LEO-NET ’24
    Starlink Performance through the Edge Router Lens
    Sarah-Michelle Hammer, Vamsi Addanki, Max Franke, and Stefan Schmid.
    Proceedings of the 2nd ACM Workshop on LEO Networking and Communication, Washington, DC, USA, 2024.
    PDF  Abstract  BibTeX
    Low-Earth Orbit satellite-based Internet has become commercially available to end users, with Starlink being the most prominent provider. Starlink has been shown to exhibit a periodic pattern with a characteristic throughput drop on the boundaries of 15s intervals, in addition to multiple bands of latency within each 15s interval. A multitude of prior works hypothesize various root causes for this pattern, such as reordering and packet loss. Some works have attributed these effects to the edge router, advocating for explicit feedback to the transport layer. However, with the edge router being a proprietary Starlink device, it raises questions about the extent of its influence on periodic throughput drops, losses, and jitter, leaving us to wonder if we fully understand the underlying issues.
         This paper presents the first measurement study with a vantage point that is by far the closest (last hop) to the core Starlink network. We use a Generation 1 dish, which allows us to bypass the proprietary Starlink router and connect a Linux server directly to the dish. We investigate the impact of the edge router on the observed periodic pattern in Starlink performance. Our results are primarily negative in terms of any significant buffer buildup and packet losses at the edge router, suggesting that the causality of the observed patterns lies entirely in the core network, a proprietary space that cannot be fixed by the end user. Interestingly, we observe similar patterns even with a constant bitrate UDP sender, likely indicating that the periodic drop in throughput is not an inherent limitation of existing TCP implementations but rather a characteristic of the core network.
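    The 15 s periodicity described above is the kind of pattern that becomes visible by folding throughput samples onto a single period. A small illustrative helper (the 1 s bucket granularity is an arbitrary choice, not from the paper):

```python
def fold_into_period(samples, period=15.0):
    """Fold (timestamp_sec, throughput) samples onto one period so a
    periodic effect -- e.g. a drop at the 15 s interval boundary --
    shows up as a dip in the per-bucket average."""
    buckets = [[] for _ in range(int(period))]
    for t, tput in samples:
        buckets[int(t % period)].append(tput)
    # Average per 1-second bucket; None where no samples fell.
    return [sum(b) / len(b) if b else None for b in buckets]
```
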
    @inproceedings{starlinkleonet24,
      author = {Hammer, Sarah-Michelle and Addanki, Vamsi and Franke, Max and Schmid, Stefan},
      title = {Starlink Performance through the Edge Router Lens},
      booktitle = {Proceedings of the 2nd ACM Workshop on LEO Networking and Communication 2024},
      year = {2024},
      address = {Washington, DC, USA}
    }
    
  4. NSDI ’24
    Credence: Augmenting Datacenter Switch Buffer Sharing with ML Predictions
    Vamsi Addanki, Maciej Pacut, and Stefan Schmid.
    21st USENIX Symposium on Networked Systems Design and Implementation (NSDI 24), Santa Clara, CA, 2024.
    PDF  Slides  Video  Code  Abstract  BibTeX
    Packet buffers in datacenter switches are shared across all the switch ports in order to improve the overall throughput. The trend of shrinking buffer sizes in datacenter switches makes buffer sharing extremely challenging and a critical performance issue. Literature suggests that push-out buffer sharing algorithms have significantly better performance guarantees compared to drop-tail algorithms. Unfortunately, switches are unable to benefit from these algorithms due to the lack of support for push-out operations in hardware. Our key observation is that drop-tail buffers can emulate push-out buffers if the future packet arrivals are known ahead of time. Recent advancements in traffic predictions pave the way toward better buffer sharing algorithms that leverage predictions.
         This paper is the first research attempt in this direction. We propose Credence, a drop-tail buffer sharing algorithm augmented with machine-learned predictions. Credence can unlock the performance only attainable by push-out algorithms so far. Its performance hinges on the accuracy of predictions. Specifically, Credence achieves near-optimal performance of the best known push-out algorithm LQD (Longest Queue Drop) with perfect predictions, but gracefully degrades to the performance of the simplest drop-tail algorithm Complete Sharing when the prediction error gets arbitrarily worse. Our evaluations show that Credence improves throughput by 1.5x compared to traditional approaches. In terms of flow completion times, we show that Credence improves upon the state-of-the-art approaches by up to 95% using off-the-shelf machine learning techniques that are also practical in today’s hardware. We believe this work opens several interesting future work opportunities both in systems and theory that we discuss at the end of this paper.
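    The push-out baseline named above, Longest Queue Drop (LQD), is simple to state: when the shared buffer is full, evict a packet from the currently longest queue to make room for the arrival. A minimal sketch (queues as plain lists; the tie-handling detail is a simplification):

```python
def lqd_admit(queues, capacity, port):
    """Longest Queue Drop admission: always accept if there is room;
    otherwise push out a packet from the longest queue, unless the
    arriving packet's own queue is (one of) the longest, in which case
    the arrival itself is dropped."""
    occupancy = sum(len(q) for q in queues)
    if occupancy < capacity:
        queues[port].append("pkt")
        return True
    longest = max(range(len(queues)), key=lambda i: len(queues[i]))
    if len(queues[longest]) > len(queues[port]):
        queues[longest].pop()       # push out from the longest queue
        queues[port].append("pkt")  # admit the arrival in its place
        return True
    return False
```

    The hardware obstacle the abstract points to is the `pop()` on an already-buffered packet: commodity drop-tail switches can only reject arrivals, which is exactly the gap that prediction-based emulation targets.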
    @inproceedings{credencensdi24,
      author = {Addanki, Vamsi and Pacut, Maciej and Schmid, Stefan},
      title = {Credence: Augmenting Datacenter Switch Buffer Sharing with ML Predictions},
      booktitle = {21st USENIX Symposium on Networked Systems Design and Implementation (NSDI 24)},
      year = {2024},
      address = {Santa Clara, CA},
      url = {https://www.usenix.org/conference/nsdi24/presentation/addanki-credence},
      publisher = {USENIX Association}
    }
    
  5. NSDI ’24
    Reverie: Low Pass Filter-Based Switch Buffer Sharing for Datacenters with RDMA and TCP Traffic
    Vamsi Addanki, Wei Bai, Stefan Schmid, and Maria Apostolaki.
    21st USENIX Symposium on Networked Systems Design and Implementation (NSDI 24), Santa Clara, CA, 2024.
    PDF  Slides  Video  Code  Abstract  BibTeX
    The switch buffers in datacenters today are shared by traffic classes with different loss tolerance and reaction to congestion signals. In particular, while legacy applications use loss-tolerant transport, e.g., DCTCP, newer applications require lossless datacenter transport, e.g., RDMA over Converged Ethernet. The allocation of buffers for this diverse traffic mix is managed by a buffer-sharing scheme. Unfortunately, as we analytically show in this paper, the buffer-sharing practices of today’s datacenters pose a fundamental limitation to effectively isolate RDMA and TCP while also maximizing burst absorption. We identify two root causes: (i) the buffer-sharing for RDMA and TCP relies on two independent and often conflicting views of the buffer, namely ingress and egress; and (ii) the buffer-sharing scheme micromanages the buffer and overreacts to the changes in its occupancy during transient congestion.
         In this paper, we present Reverie, a buffer-sharing scheme which, unlike prior works, is suitable for both lossless and loss-tolerant traffic classes, providing isolation as well as superior burst absorption. At the core of Reverie lies a unified (consolidated ingress and egress) admission control that jointly optimizes the buffers for both traffic classes. Reverie allocates buffer based on a low-pass filter that naturally absorbs bursty queue lengths during transient congestion within the buffer limits. Our evaluation shows that Reverie can improve the performance of RDMA as well as TCP in terms of flow completion times by up to 33%.
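    The low-pass-filter idea can be illustrated with a first-order exponential filter over buffer occupancy: admission reacts to the smoothed signal, so a transient burst does not trigger an immediate overreaction. The smoothing gain and the 0.9 headroom factor below are illustrative assumptions, not Reverie's actual parameters:

```python
def make_lowpass_threshold(alpha=0.1):
    """Admission driven by a low-pass-filtered (exponentially averaged)
    view of buffer occupancy rather than its instantaneous value.
    alpha is the filter gain: smaller alpha = slower, smoother reaction."""
    state = {"avg": 0.0}

    def admit(instant_occupancy, capacity):
        # First-order low-pass filter over the occupancy signal.
        state["avg"] += alpha * (instant_occupancy - state["avg"])
        # Admit while the *smoothed* occupancy leaves headroom.
        return state["avg"] < 0.9 * capacity

    return admit
```

    With a sustained full buffer the filtered value eventually crosses the headroom and admission stops; a short burst decays before it does, which is the burst-absorption behavior the abstract describes.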
    @inproceedings{reveriensdi24,
      author = {Addanki, Vamsi and Bai, Wei and Schmid, Stefan and Apostolaki, Maria},
      title = {Reverie: Low Pass Filter-Based Switch Buffer Sharing for Datacenters with RDMA and TCP Traffic},
      booktitle = {21st USENIX Symposium on Networked Systems Design and Implementation (NSDI 24)},
      year = {2024},
      address = {Santa Clara, CA},
      url = {https://www.usenix.org/conference/nsdi24/presentation/addanki-reverie},
      publisher = {USENIX Association}
    }
    
  6. SIGMETRICS ’23
    Mars: Near-Optimal Throughput with Shallow Buffers in Reconfigurable Datacenter Networks
    Vamsi Addanki, Chen Avin, and Stefan Schmid.
    Proc. ACM Meas. Anal. Comput. Syst., ACM SIGMETRICS, Orlando, Florida, USA, 2023.
    PDF  Slides  Abstract  BibTeX
    The performance of large-scale computing systems often critically depends on high-performance communication networks. Dynamically reconfigurable topologies, e.g., based on optical circuit switches, are emerging as an innovative new technology to deal with the explosive growth of datacenter traffic. Specifically, periodic reconfigurable datacenter networks (RDCNs) such as RotorNet (SIGCOMM 2017), Opera (NSDI 2020) and Sirius (SIGCOMM 2020) have been shown to provide high throughput, by emulating a complete graph through fast periodic circuit switch scheduling.
         However, to achieve such a high throughput, existing reconfigurable network designs pay a high price: in terms of potentially high delays, but also, as we show as a first contribution in this paper, in terms of the high buffer requirements. In particular, we show that under buffer constraints, emulating the high-throughput complete-graph is infeasible at scale, and we uncover a spectrum of unvisited and attractive alternative RDCNs, which emulate regular graphs of lower node degree.
         We present Mars, a periodic reconfigurable topology which emulates a d-regular graph with near-optimal throughput. In particular, we systematically analyze how the degree d can be optimized for throughput given the available buffer and delay tolerance of the datacenter. We further show empirically that Mars achieves higher throughput compared to existing systems when buffer sizes are bounded.
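    A periodic schedule that emulates a d-regular graph can be sketched with cyclic-shift matchings: over one period of d slots, every node connects to d distinct neighbors. This simple shift construction is a stand-in for RotorNet/Sirius-style schedules, not Mars's optimized design:

```python
def rotor_schedule(n, d):
    """Periodic circuit schedule over n nodes: in slot k, node i sends
    to node (i + k) mod n. Over one period (d slots) the union of the
    matchings is a d-regular directed graph -- the degree d is the knob
    traded off against buffer and delay."""
    assert 1 <= d < n
    schedule = []
    for k in range(1, d + 1):
        # Slot k: the cyclic-shift-by-k perfect matching.
        schedule.append([(i, (i + k) % n) for i in range(n)])
    return schedule
```

    Emulating the complete graph corresponds to d = n - 1; the paper's point is that under buffer constraints a smaller d, chosen for the available buffer and delay tolerance, yields better throughput.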
    @inproceedings{mars23,
      author = {Addanki, Vamsi and Avin, Chen and Schmid, Stefan},
      title = {Mars: Near-Optimal Throughput with Shallow Buffers in Reconfigurable Datacenter Networks},
      year = {2023},
      booktitle = {Proc. ACM Meas. Anal. Comput. Syst., ACM SIGMETRICS},
      url = {https://doi.org/10.1145/3579312}
    }
    
  7. SIGCOMM ’22
    ABM: Active Buffer Management in Datacenters
    Vamsi Addanki, Maria Apostolaki, Manya Ghobadi, Stefan Schmid, and Laurent Vanbever.
    Proceedings of the ACM SIGCOMM 2022 Conference, Amsterdam, Netherlands, 2022.
    PDF  Slides  Video  Code  Abstract  BibTeX
    Today’s network devices share buffers across queues to avoid drops during transient congestion and absorb bursts. As the buffer-per-bandwidth-unit in datacenters decreases, the need for optimal buffer utilization becomes more pressing. Typical devices use a hierarchical packet admission control scheme: First, a Buffer Management (BM) scheme decides the maximum length per queue at the device level and then an Active Queue Management (AQM) scheme decides which packets will be admitted at the queue level. Unfortunately, the lack of cooperation between the two control schemes leads to (i) harmful interference across queues, due to the lack of isolation; (ii) increased queueing delay, due to the obliviousness to the per-queue drain time; and (iii) thus unpredictable burst tolerance. To overcome these limitations, we propose ABM, Active Buffer Management, which incorporates insights from both BM and AQM. Concretely, ABM accounts for both total buffer occupancy (typically used by BM) and queue drain time (typically used by AQM). We analytically prove that ABM provides isolation, bounded buffer drain time and achieves predictable burst tolerance without sacrificing throughput. We empirically find that ABM improves the 99th percentile FCT for short flows by up to 94% compared to the state-of-the-art buffer management. We further show that ABM improves the performance of advanced datacenter transport protocols in terms of FCT by up to 76% compared to DCTCP, TIMELY and PowerTCP under bursty workloads even at moderate load conditions.
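    The two signals named above can be combined in a single per-queue threshold: one factor scales with the remaining free buffer (the BM-style, device-level signal) and one with the queue's dequeue rate, which bounds its drain time (the AQM-style, queue-level signal). The exact combination and the alpha parameter below are illustrative assumptions, not ABM's published formula:

```python
def admission_threshold(alpha, free_buffer, dequeue_rate, n_congested):
    """Illustrative per-queue admission threshold combining both
    signals: free_buffer is B minus total occupancy (BM signal),
    dequeue_rate in [0, 1] is the queue's normalized service rate
    (its drain-time proxy, the AQM signal), and n_congested splits
    the headroom among currently congested queues."""
    if n_congested == 0:
        return alpha * free_buffer
    # Faster-draining queues earn more buffer; slow queues are capped,
    # which bounds drain time and preserves headroom for bursts.
    return alpha * free_buffer * dequeue_rate / n_congested
```
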
    @inproceedings{abm,
      author = {Addanki, Vamsi and Apostolaki, Maria and Ghobadi, Manya and Schmid, Stefan and Vanbever, Laurent},
      title = {ABM: Active Buffer Management in Datacenters},
      year = {2022},
      booktitle = {Proceedings of the ACM SIGCOMM 2022 Conference}
    }
    
  8. NSDI ’22
    PowerTCP: Pushing the Performance Limits of Datacenter Networks
    Vamsi Addanki, Oliver Michel, and Stefan Schmid.
    19th USENIX Symposium on Networked Systems Design and Implementation (NSDI 22), Renton, WA, 2022.
    PDF  Slides  Video  Website  Code  Abstract  BibTeX
    Increasingly stringent throughput and latency requirements in datacenter networks demand fast and accurate congestion control. We observe that the reaction time and accuracy of existing datacenter congestion control schemes are inherently limited. They either rely only on explicit feedback about the network state (e.g., queue lengths in DCTCP) or only on variations of state (e.g., RTT gradient in TIMELY). To overcome these limitations, we propose a novel congestion control algorithm, PowerTCP, which achieves much more fine-grained congestion control by adapting to the bandwidth-window product (henceforth called power). PowerTCP leverages in-band network telemetry to react to changes in the network instantaneously without loss of throughput and while keeping queues short. Due to its fast reaction time, our algorithm is particularly well-suited for dynamic network environments and bursty traffic patterns. We show analytically and empirically that PowerTCP can significantly outperform the state-of-the-art in both traditional datacenter topologies and emerging reconfigurable datacenters where frequent bandwidth changes make congestion control challenging. In traditional datacenter networks, PowerTCP reduces tail flow completion times of short flows by 80% compared to DCQCN and TIMELY, and by 33% compared to HPCC even at 60% network load. In reconfigurable datacenters, PowerTCP achieves 85% circuit utilization without incurring additional latency and cuts tail latency by at least 2x compared to existing approaches.
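    The power-based reaction described above can be sketched as a window update that normalizes a "power" signal, combining in-network state (queue plus bandwidth-delay product) with the current rate, against its congestion-free target. The normalization and the gamma-smoothing below are illustrative assumptions in the spirit of the abstract, not PowerTCP's exact published update rule:

```python
def power_window_update(cwnd, queue_len, rate, bdp, base_rtt, gamma=0.9):
    """Illustrative power-based window update: 'power' grows with both
    the absolute network state (queue_len + bdp) and the current rate,
    so the sender reacts to state and to its variation at once. At the
    fixed point (empty queue, line rate) the window is left unchanged."""
    power = (queue_len + bdp) * rate       # observed congestion "power"
    base_power = bdp * (bdp / base_rtt)    # target: empty queue at line rate
    new_cwnd = cwnd * (base_power / power) # scale window toward the target
    return gamma * new_cwnd + (1 - gamma) * cwnd  # smooth the update
```

    With telemetry-grade (per-packet, in-band) measurements of `queue_len` and `rate`, such an update reacts within one RTT of a bandwidth change, which is why this style of control suits reconfigurable fabrics with frequent bandwidth changes.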
    @inproceedings{nsdi22,
      author = {Addanki, Vamsi and Michel, Oliver and Schmid, Stefan},
      title = {{PowerTCP}: Pushing the Performance Limits of Datacenter Networks},
      booktitle = {19th USENIX Symposium on Networked Systems Design and Implementation (NSDI 22)},
      year = {2022},
      address = {Renton, WA},
      url = {https://www.usenix.org/conference/nsdi22/presentation/addanki},
      publisher = {USENIX Association}
    }