Blogs


Energy Requirements for AI Infrastructure: Current and Future Impacts

The rapid expansion of artificial intelligence (AI), particularly large language models (LLMs) and generative AI, has driven an unprecedented surge in energy demand due to the computational intensity of training and operating these systems. Eric Schmidt, former Google CEO, has called electricity the primary constraint on AI growth, estimating that the U.S. will need an additional 92 gigawatts (GW) of power, roughly the output of 92 nuclear power plants, to sustain the AI revolution. This analysis explores the current energy consumption of major companies’ AI infrastructure, projects future energy needs through 2035, and examines how these demands will reshape the energy sector, drawing on available data from web sources and posts on X.

Current Energy Consumption by Major Companies

Overview

Major tech companies, or “hyperscalers” (e.g., Microsoft, Google, Meta, Amazon, OpenAI), are the primary drivers of AI infrastructure energy demand, operating massive data centers for training and inference of AI models. Training a single state-of-the-art AI model, such as OpenAI’s GPT-4, can consume 50 gigawatt-hours (GWh) of electricity, equivalent to the annual energy use of 4,800 U.S. households. Inference (running AI models for user queries) is also energy-intensive, with a single ChatGPT query requiring approximately 2.9 watt-hours, nearly 10 times that of a Google search (0.3 watt-hours). Below is an overview of key players’ energy footprints based on available data:

[Figure: overview of key players’ energy footprints]
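
As a quick sanity check on the figures above, here is a minimal back-of-the-envelope sketch; the average annual U.S. household consumption of roughly 10.5 MWh is an assumed figure (close to the EIA average), not a number from the sources cited.

```python
# Back-of-the-envelope check of the training and per-query figures quoted above.
GPT4_TRAINING_GWH = 50           # reported training energy for a GPT-4-scale model
HOUSEHOLD_MWH_PER_YEAR = 10.5    # assumed average annual U.S. household consumption

households = GPT4_TRAINING_GWH * 1_000 / HOUSEHOLD_MWH_PER_YEAR
print(f"Training energy = {households:,.0f} household-years")   # ~4,800

CHATGPT_WH, GOOGLE_SEARCH_WH = 2.9, 0.3   # per-query estimates cited above
ratio = CHATGPT_WH / GOOGLE_SEARCH_WH
print(f"A ChatGPT query = {ratio:.1f}x a Google search")        # ~9.7x
```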

Read on →

From Text to Tokens: The Complete Guide to Tokenization in LLMs

In the ever-evolving field of artificial intelligence, large language models (LLMs) like GPT-4, Claude, Gemini, and LLaMA have reshaped how machines understand and generate human language. Behind the impressive capabilities of these models lies a deceptively simple but foundational step: tokenization.

In this blog, we will dive deep into tokenization: its types, why it’s needed, the challenges it solves, how it works under the hood, and where it’s headed in the future. This is a one-stop technical deep-dive for anyone looking to fully grasp the backbone of language understanding in LLMs.


What is Tokenization?

At its core, tokenization is the process of converting raw text into smaller units called tokens that a language model can understand and process. These tokens can be:

  • Characters
  • Words
  • Subwords
  • Byte-pair encoding (BPE) subwords
  • WordPiece units
  • SentencePiece units
  • Byte-level representations

Each model has its own strategy, depending on design goals like efficiency, vocabulary size, multilingual handling, and memory constraints.
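
To make this concrete, here is a minimal sketch using the open-source tiktoken library (one tool among many; the choice is an assumption, since different models ship different tokenizers). The "cl100k_base" encoding is the byte-pair encoding used by GPT-4-era OpenAI models.

```python
# Minimal tokenization sketch; requires `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # BPE used by GPT-4-era models

text = "Tokenization converts raw text into tokens."
ids = enc.encode(text)                       # text -> integer token IDs
pieces = [enc.decode([i]) for i in ids]      # decode each ID to inspect the pieces

print(ids)      # the integer sequence the model actually consumes
print(pieces)   # subword pieces, e.g. ['Token', 'ization', ' converts', ...]
assert enc.decode(ids) == text               # round-trip is lossless
```

Notice how common words survive as single tokens while rarer words split into subwords, which is exactly the efficiency/vocabulary-size trade-off mentioned above.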

Read on →

Electric Illusion: The Rise and Fall of BluSmart

BluSmart was once a symbol of India’s clean energy aspirations — an all-electric ride-hailing platform backed by marquee investors and government lenders. With its zero-emissions fleet and no-surge pricing model, it quickly gained popularity in cities like Delhi and Bengaluru.

But behind the scenes, the startup’s success story unraveled into one of the most serious corporate fraud cases in India’s startup ecosystem. At the center of this financial maze was Gensol Engineering Ltd, a publicly listed company controlled by the same promoters behind BluSmart. The ₹262 crore scandal that emerged in 2025 now implicates not just BluSmart but also Gensol’s board, finances, and investors.

(Image source: Internet)

Read on →

FTX Scandal 2023: Timeline, Facts, and Key Players

In the annals of modern financial history, few names have sparked as much controversy, disbelief, and chaos as Futures Exchange (FTX). Once hailed as a shining star of the cryptocurrency world, FTX’s meteoric rise and catastrophic fall stunned investors, regulators, and the general public alike. By the end of 2023, the scandal surrounding FTX and its founder Sam Bankman-Fried had cemented its place as one of the largest and most complex financial frauds of the 21st century.

This blog dives into the rise and fall of FTX, examining the events that led to its collapse, the financial and human toll it took, and the key takeaways from a debacle that shook the entire crypto industry to its core.

(Image source: Internet)


The Rise of FTX: From Start-up to Crypto Juggernaut

FTX was founded in 2019 by Sam Bankman-Fried (commonly referred to as SBF), an MIT graduate and former quantitative trader at Jane Street Capital with a reputation for genius-level intellect. The exchange was created as a more sophisticated platform for cryptocurrency derivatives and quickly attracted traders looking for advanced features, high leverage, and innovative products.

By 2021, FTX had:

  • Raised over $1.8 billion from prominent investors including Sequoia Capital, SoftBank, and Tiger Global.
  • Claimed over 1 million users and processed billions of dollars in trades daily.
  • Achieved a staggering $32 billion valuation, making it the third-largest crypto exchange globally.

SBF’s influence extended well beyond the company. He was a frequent guest on financial talk shows, lobbied in Washington, and was dubbed the “JP Morgan of crypto” after bailing out other struggling crypto firms in 2022. But behind the charismatic image and philanthropic posturing was a house of cards waiting to collapse.

Read on →

Smartcase Engine: A Modern Framework for Intelligent Case Management

In today’s dynamic business environment, efficient case management is paramount. Enter Smartcase Engine, an advanced case management framework designed to streamline complex case handling through real-time tracking, efficient workflows, and automated decision-making processes.

What is Smartcase Engine?

Smartcase Engine is a modular, microservices-based platform tailored for managing intricate case workflows. It offers:

  • Real-Time Case Tracking: Monitor cases as they progress through various stages.
  • Efficient Workflows: Automate and optimize the sequence of tasks involved in case resolution.
  • Automated Decision-Making: Leverage predefined rules and AI to make informed decisions without manual intervention (a minimal rules sketch follows this list).
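
To make the rules-driven idea concrete, here is a hypothetical sketch of a stage-transition engine; the class, rule, and function names are illustrative only and are not Smartcase Engine’s actual API.

```python
# Hypothetical rules-driven case workflow; names are illustrative, not the real API.
from dataclasses import dataclass, field

@dataclass
class Case:
    case_id: str
    stage: str = "intake"
    history: list = field(default_factory=list)

# Each rule: (current stage, predicate over case data, next stage).
RULES = [
    ("intake", lambda d: d.get("priority") == "high", "escalated"),
    ("intake", lambda d: True,                        "review"),
    ("review", lambda d: d.get("approved") is True,   "resolved"),
]

def advance(case: Case, data: dict) -> Case:
    """Apply the first matching rule and record the transition."""
    for stage, predicate, next_stage in RULES:
        if case.stage == stage and predicate(data):
            case.history.append((case.stage, next_stage))
            case.stage = next_stage
            break
    return case

case = advance(Case("C-1001"), {"priority": "high"})
print(case.stage, case.history)   # escalated [('intake', 'escalated')]
```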

(Image source: Rishijeet Mishra’s Blog)

Read on →