In the enterprise world, it is often mandatory for a web application to support multi-tenancy. Multi-tenancy is a software architecture and approach where a single instance of a software application serves multiple customers or tenants. Each tenant is a separate entity with its own data, configurations, and user management, but they all share the same underlying software infrastructure. In other words, the software is designed to provide a single application instance that can be customised and configured for different organisations or users.
A multi-tenant application must support varying static content such as the customer's logo/backgrounds, branding, CMS content, and 3rd-party analytics SDKs like Google Analytics, Freshdesk, Hotjar, etc. For that, it is often a recommended practice to keep all the static assets for each customer in a dedicated folder and then have separate project configurations in Angular CLI, as shown below. This way you can build the application for each tenant one at a time, using the matching configuration.
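For illustration, the per-tenant configurations in angular.json might look roughly like this (a minimal sketch; the project and folder names are made up):

```json
{
  "projects": {
    "myapp": {
      "architect": {
        "build": {
          "configurations": {
            "client1": {
              "assets": [{ "glob": "**/*", "input": "src/assets/client1", "output": "/assets" }],
              "styles": ["src/assets/client1/styles.scss"]
            },
            "client2": {
              "assets": [{ "glob": "**/*", "input": "src/assets/client2", "output": "/assets" }],
              "styles": ["src/assets/client2/styles.scss"]
            }
          }
        }
      }
    }
  }
}
```

You would then run ng build --configuration=client1 (and so on) once per tenant.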
However, there are several drawbacks to the aforementioned approach:
You are rebuilding the same application multiple times just because of different static assets required by each customer.
You are increasing the time taken by the CI/CD pipeline with every new customer, since it builds the same application over and over, and this only gets worse as the number of customers grows.
You can run the application for only one customer at a time, making it very time-consuming to switch between them locally.
To tackle these problems, I'm proposing a new approach to multi-tenancy.
Multi-tenancy the modern way
In the new approach, you build the application only once, regardless of the number of customers, keeping the CI/CD pipeline at O(1). You can even access the same application for multiple customers locally at the same time, which is a big productivity boost for developers. Let's see how.
The major weakness in the existing approach is the static assets. Once you move the static assets out of the build and load them at run-time instead, you have won half the battle. But how do we load the correct static assets for a particular customer 🤔? The answer lies in the application URL.
Map static assets to Application URL
In order to map the application URL to the respective static assets, we can create one folder per customer under the /assets directory. Each of these directories holds the customer's logo, styles, profile.json, and 3rd-party SDKs such as Freshdesk. The profile.json file may contain the customer's name and other metadata related to authentication or feature flags, and it is the first thing the application fetches.
Here, when you access https://client1.myapp.com in the browser, we can read the hostname via window.location.hostname to find the appropriate static assets folder and query its profile.json file to apply customer-specific feature flags and branding at run-time, as sketched below.
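A minimal sketch of that first fetch (the service shape and profile.json fields are illustrative, assuming each assets folder is named after the tenant hostname):

```ts
// profile.service.ts — a hypothetical run-time loader for tenant metadata
export interface Profile {
  name: string;
  featureFlags: { [flag: string]: boolean };
}

export async function loadProfile(): Promise<Profile> {
  // e.g. 'client1.myapp.com' -> /assets/client1.myapp.com/profile.json
  const hostname = window.location.hostname;
  const response = await fetch(`/assets/${hostname}/profile.json`);
  return response.json();
}
```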
Override Application hostname
This is a crucial bit: it lets us avoid maintaining static assets under the aforementioned localhost directory and point somewhere else instead. For that, we create localhost/override-hostname.js to override the application hostname and include the script tag in index.html.
window.overrideHostname = 'client1.myapp.com';
This allows you to switch between customers without restarting the application. For example, by changing client1.myapp.com to client2.myapp.com, the application would load client2's static assets at run-time.
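So the hostname lookup from the earlier sketch just gains a fallback chain (a sketch, assuming the window.overrideHostname global set above):

```ts
const hostname = (window as any).overrideHostname || window.location.hostname;
```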
Emit styles for all customers
In Angular/Nx CLI, you can compile multiple SASS files without injecting them into index.html.
Then dynamically inject the appropriate styles at bootstrapping as follows:
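The original snippet is not reproduced here, but a minimal sketch of the idea could be: declare each tenant stylesheet in angular.json with inject: false and a bundleName, then append a link tag for the matching bundle before bootstrapping (the file and bundle names are assumptions):

```ts
// main.ts — load the tenant stylesheet at run-time, then bootstrap
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { AppModule } from './app/app.module';

// assumes angular.json lists e.g. { "input": "src/assets/client1.myapp.com/styles.scss",
//   "bundleName": "client1.myapp.com", "inject": false }
const hostname = (window as any).overrideHostname || window.location.hostname;

const link = document.createElement('link');
link.rel = 'stylesheet';
link.href = `${hostname}.css`; // the emitted per-tenant bundle
document.head.appendChild(link);

platformBrowserDynamic().bootstrapModule(AppModule);
```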
Requestly is a browser extension that lets us intercept and modify HTTP requests. It plays a major role in resolving the 3rd drawback we noted earlier (i.e., being able to run the application for only one customer at a time). To resolve it, you must define 3 rules in Requestly:
Without the above rules, the application renders for client1, loading its logo and applying its branding. With the rules in place, it renders for client2 correctly.
You may have used jQuery in your projects before; however, with the advancements in frontend frameworks, we are sort of forced to forget about the vast ecosystem of jQuery and its plugins. The main reason is that your framework of choice is not compatible with jQuery or does not have a jQuery-friendly API out of the box. So we either look for an alternative library written from scratch, or for a wrapper around the jQuery/JavaScript plugin written in the framework of choice. Svelte, however, is a different beast: it has something called the Action API that lets you consume a jQuery/JavaScript plugin without much framework-related overhead.
Last time I had tweeted my wild prediction, which is what I have showcased in this tutorial video.
Wild Prediction: Action API in @sveltejs will resurrect @jquery plugins ecosystem.
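For illustration, a Svelte action wrapping a jQuery plugin might look like this (a sketch; toolbarPlugin is a made-up plugin name):

```ts
// toolbar-action.ts
import jQuery from 'jquery';

export function toolbar(node: HTMLElement, options: Record<string, unknown>) {
  // initialise the (hypothetical) jQuery plugin on the element Svelte hands us
  const $el = (jQuery(node) as any).toolbarPlugin(options);

  return {
    // called when the options passed via use:toolbar={...} change
    update(newOptions: Record<string, unknown>) {
      $el.toolbarPlugin(newOptions);
    },
    // called when the element is removed from the DOM
    destroy() {
      $el.toolbarPlugin('destroy');
    }
  };
}
```

In a component you would then write use:toolbar={{ position: 'right' }} on an element, and Svelte takes care of the rest of the lifecycle.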
You may have used the Angular Material Sidenav component in your projects before; however, it simply vanishes when collapsed by default. What if you want the Sidenav to remain visible when collapsed, showing side links with icons only? Of course, you could have 2 Sidenavs (one with icons + text and another with icons only) and show/hide them conditionally. But by design, Angular Material does not let you have 2 Sidenavs in the same position.
So here is how I've devised a clever way to work around that limitation.
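The original snippet is not reproduced here, but one possible reconstruction of the idea (my sketch, not necessarily the post's exact code) keeps a single always-open Sidenav and toggles a CSS class that shrinks it to icon width; links and isCollapsed are illustrative:

```html
<mat-sidenav mode="side" opened [ngClass]="{ collapsed: isCollapsed }">
  <mat-nav-list>
    <a mat-list-item *ngFor="let link of links" [routerLink]="link.path">
      <mat-icon>{{ link.icon }}</mat-icon>
      <span *ngIf="!isCollapsed">{{ link.text }}</span>
    </a>
  </mat-nav-list>
</mat-sidenav>

<!-- plus styles along the lines of:
     mat-sidenav { width: 220px; }
     mat-sidenav.collapsed { width: 64px; } -->
```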
Reader, I’m extremely sorry for using the clickbait-y headline for this post. I did not mean to talk ill about anything since this is not a rant 😷
The Angular Flex-Layout library is still really useful today for quickly sprinkling CSS Flexbox or Grid layouts declaratively into Angular applications, and it does save time compared with hand-writing layout CSS repetitively across Angular components. So dropping it is not an option unless a far better alternative with the same declarative ability is available. Hence I'm looking for something that is:
Declarative, without a steep learning curve.
Cost-effective over the network.
Customisable enough to be an alter ego of Angular Flex-Layout.
Turns out TailwindCSS hits a home run on all these fronts. It is equally declarative, being a utility-first CSS framework with all kinds of CSS classes baked in. And it reduces the main bundle size of Angular applications by a huge margin; the larger the application, the bigger the gain (a sample application I built saw a ~40% dip in main bundle size without gzip). Watch the video for the proof 👇
Demonstration of advantages of TailwindCSS over Angular Flex Layout
If you are convinced by the demo above, then you may find this comparison handy while moving away from Angular Flex-Layout in your own application.
Tailwind does not support grid-template-areas, so we have to define grid columns/rows and then apply Grid Row Start/End or Grid Column Start/End classes on grid items, e.g. xs:grid-cols-2 xs:grid-rows-2.
Use Gap classes, along with custom classes added by overriding the Tailwind config.
Angular Flex Layout vs Tailwind comparison
Angular Flex-Layout also has something TailwindCSS lacks: MediaObserver, which enables applications to listen for media query activations. I think the BreakpointObserver from Angular CDK can compensate for it.
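A minimal sketch of that substitution, using BreakpointObserver from @angular/cdk/layout (the service name is mine):

```ts
import { Injectable } from '@angular/core';
import { BreakpointObserver, Breakpoints } from '@angular/cdk/layout';

@Injectable({ providedIn: 'root' })
export class LayoutService {
  constructor(breakpointObserver: BreakpointObserver) {
    // roughly what MediaObserver gave us: a stream of media query activations
    breakpointObserver
      .observe([Breakpoints.Handset, Breakpoints.Tablet])
      .subscribe(state => console.log('matches:', state.matches));
  }
}
```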
Do let me know in the comments if you agree with the proposed solution, or if I missed anything.
🎅 Happy Xmas and Happy New Year 2021 🎅
Update 1
I have tried to integrate Tailwind into an Angular application that uses Angular Material and found a few issues with CSS specificity: in certain cases, the Tailwind CSS classes do not get applied as expected, and to work around the issue I had to apply !important to a handful of Tailwind CSS classes. I tweeted the approach last time:
Angular is a very advanced and well-thought-out framework where Components, Directives, Services, and Routes are just the tip of the iceberg. So my intention with the WTH series of posts is to understand something in Angular (and its brethren) that I do not know yet, and to teach others as well.
Let us start with a simple example: an Angular Injectable service that we all know and use extensively in any Angular project. This particular example is also very common in Angular projects, where we often maintain environment-specific configurations.
// environment.service.ts
import { Injectable } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class Environment1 {
  production: boolean = false;
}
Here, we annotated a standard ES6 class with the @Injectable decorator and forced Angular to provide it in the application root, making it a singleton service, i.e. a single instance shared across the application. Then we can use the TypeScript constructor shorthand syntax to inject the above service into the component's constructor as follows.
// app.component.ts
import { Component } from '@angular/core';
import { Environment1 } from './environment.service';

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html'
})
export class AppComponent {
  constructor(private environment1: Environment1) {}
}
But often such environment-specific configurations come in the form of POJOs, not ES6 classes. So the TypeScript constructor shorthand syntax will not help here. However, we can naively store the POJO in a class property and use it in the template.
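The POJO itself is not shown above; based on the imports used below, it presumably looks something like this (a reconstruction):

```ts
// environment.service.ts
export interface Environment {
  production: boolean;
}

export const Environment2: Environment = {
  production: false,
};
```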
// app.component.ts
import { Component } from '@angular/core';
import { Environment, Environment2 } from './environment.service';

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html'
})
export class AppComponent {
  environment2: Environment = Environment2;
}
This will work, no doubt! But it defeats the whole purpose of Dependency Injection (DI) in Angular, which helps us mock dependencies seamlessly while testing.
InjectionToken
That’s why Angular provides a mechanism to create an injection token for POJOs to make them injectable.
Creating InjectionToken
Creating an InjectionToken is pretty straightforward. First, describe your injection token, then set the scope with providedIn (just like the Injectable service we saw earlier), followed by a factory function that is evaluated upon injecting the generated token into a component.
Here, we are creating an injection token ENVIRONMENT for the Environment2 POJO.
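The token-creation snippet is not reproduced above; based on the description, it presumably resembles this (a reconstruction):

```ts
// injection.tokens.ts
import { InjectionToken } from '@angular/core';
import { Environment, Environment2 } from './environment.service';

export const ENVIRONMENT = new InjectionToken<Environment>('environment', {
  providedIn: 'root',
  factory: () => Environment2,
});
```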
Feel free to remove providedIn in case you do not want a singleton instance of the token.
Injecting InjectionToken
Now that we have the injection token available, all we need to do is inject it into our component. For that, we can use the @Inject() decorator, which simply injects the token from the currently active injectors.
// app.component.ts
import { Component, Inject } from '@angular/core';
import { Environment } from './environment.service';
import { ENVIRONMENT } from './injection.tokens';

@Component({
  selector: 'my-app',
  templateUrl: './app.component.html'
})
export class AppComponent {
  constructor(@Inject(ENVIRONMENT) private environment2: Environment) {}
}
Additionally, you can provide the injection token in @NgModule and get rid of providedIn and the factory function while creating the InjectionToken, if that suits you better.
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { AppComponent } from './app.component';
import { Environment2 } from './environment.service';
import { ENVIRONMENT } from './injection.tokens';

@NgModule({
  imports: [BrowserModule],
  declarations: [AppComponent],
  bootstrap: [AppComponent],
  providers: [{
    provide: ENVIRONMENT,
    useValue: Environment2
  }]
})
export class AppModule { }
A month ago, I wrote a blog post explaining a hacky way to enable tree-shaking in a Rails/Webpacker project at Simpl. I would definitely recommend skimming through that post if you have not already.
In this post, we will jump straight into a more robust and stable solution. But before that, let me resurrect the old memories that haunted me for months, wherein a broken manifest.json was generated at a random point during webpack compilation. This time, after upgrading @rails/webpacker and the related webpack plugins, the problem escalated beyond repair: an incomplete but valid manifest.json was randomly generated with fewer pack entries than expected. So even the generated manifest.json could hardly be rescued by the hacky NodeJS fix_manifest.js script I had written last time to fix the broken JSON.
After a bit of googling my way out, I learned that webpack, with multi-compiler configurations, compiles each webpack configuration asynchronously and in no guaranteed order, which is why I was getting an invalid manifest.json earlier.
Imagine two webpack compilations running simultaneously, both writing to the same manifest.json at the same time: that is the race condition I was hitting.
Yes, this is the robust and stable solution I came up with. First, you have to override the Manifest fileName in every webpack configuration in order to generate a separate manifest file for each pack, such as manifest-0.json, manifest-1.json, and so on. Then, use the same NodeJS script fix_manifest.js, with a slight modification, to concatenate all the generated files into a final manifest.json which will be both accurate (having all the desired entries) and valid (JSON).
For that, we have to modify the existing generateMultiWebpackConfig method (in ./config/webpack/environment.js) to remove the existing clutter of disabling/enabling the writeToEmit flag in Manifest, which we no longer need. Instead, we will create a deep copy of the original webpack configuration and override the Manifest plugin opts for each entry. The deep copy is mandatory so that a unique Manifest fileName persists for each pack file.
const { environment } = require('@rails/webpacker')
const cloneDeep = require('lodash.clonedeep')

environment.generateMultiWebpackConfig = function(env) {
  let webpackConfig = env.toWebpackConfig()
  // extract entries to map later in order to generate separate
  // webpack configuration for each entry.
  // P.S. extremely important step for tree-shaking
  let entries = Object.keys(webpackConfig.entry)
  // Finally, map over extracted entries to generate a deep copy of
  // Webpack configuration for each entry to override Manifest fileName
  return entries.map((entryName, i) => {
    let deepClonedConfig = cloneDeep(webpackConfig)
    deepClonedConfig.plugins.forEach((plugin, j) => {
      // A check for Manifest Plugin
      if (plugin.opts && plugin.opts.fileName) {
        deepClonedConfig.plugins[j].opts.fileName = `manifest-${i}.json`
      }
    })
    return Object.assign(
      {},
      deepClonedConfig,
      { entry: { [entryName]: webpackConfig.entry[entryName] } }
    )
  })
}
Finally, we will update the ./config/webpack/fix_manifest.js NodeJS script to concatenate all the generated Manifest files into a single manifest.json file.
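The updated script is not shown above; a minimal sketch of the concatenation step, assuming the manifests are emitted to public/packs:

```js
// config/webpack/fix_manifest.js — merge manifest-0.json, manifest-1.json, ... into manifest.json
const fs = require('fs');
const path = require('path');

const packsDir = path.resolve(__dirname, '../../public/packs');

const merged = fs
  .readdirSync(packsDir)
  .filter((file) => /^manifest-\d+\.json$/.test(file))
  .reduce((acc, file) => {
    const entries = JSON.parse(fs.readFileSync(path.join(packsDir, file), 'utf8'));
    return Object.assign(acc, entries);
  }, {});

fs.writeFileSync(path.join(packsDir, 'manifest.json'), JSON.stringify(merged, null, 2));
```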
Please note that the compilation of a huge number of JS/TS entries takes a lot of time and CPU, hence it is recommended to use this approach only in a production environment. Additionally, set the max_old_space_size Node flag (e.g. node --max_old_space_size=8000) to handle out-of-memory issues during the production compilation as per your need; I'm using 8000MB, i.e. 8GB, here.
I really could not think of a better title for this post because it is not just about using an @Input property setter instead of the ngAfterViewInit life-cycle hook. Hence the title is pretty much inspired by RTFM, with "Manual" replaced by "Code".
It’s about how important it is to read the code.
Just read the code..!
Last month I published an Angular blog post on the NgConf Medium publication in which I proposed various ways to use jQuery plugins in Angular. If you have not read it yet, do read it here and leave a comment if you have any feedback. Unfortunately, I did not get lucky enough to become an ngChampion (kudos to those who did), and hence I have decided to publish the sequel here on my personal blog.
So after publishing the first post, I went on to read the source code of the Material Badge component, just casually.
And to my surprise, I noticed 3 profound things:
Structural Directive over Component
It depends on the functionality you want to build into the component. If all you want to do is alter a single DOM element, then always go for a custom structural directive instead of writing a custom component, because a custom component mostly introduces its own APIs unnecessarily.
For example, take a look at the app-toolbar-legends component from the last article. Remember, I'm not contradicting myself in this article; however, for this particular jQuery plugin in Angular, we could safely create an Angular directive rather than an Angular component with its own API in terms of the class and icon attributes below.
That means we can simplify the usage of the jQuery plugin in Angular by slapping an Angular directive onto the existing markup as follows. There is no need to separately understand where the class or icon values go in the component template; it is pretty clear and concise here. Easy: just slap the appToolbarLegends directive on, along with the jQuery plugin configurations.
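For illustration, the before/after usage might look like this (my sketch of the markup, since the original snippet is not shown; the icon name is made up):

```html
<!-- before: a component wrapping the markup, re-exposing class/icon as its own API -->
<app-toolbar-legends class="toolbar" icon="star"></app-toolbar-legends>

<!-- after: the directive is slapped onto the existing markup -->
<div class="toolbar" [appToolbarLegends]="{ position: 'right' }">
  <i class="icon-star"></i>
</div>
```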
I wanted a unique id attribute for each instance of the toolbar in order to map them to their respective toolbar buttons. I’m still laughing at myself for going above and beyond just to generate a unique ID with 0 dependencies. Finally, StackOverflow came to the rescue 😅
Math.random().toString(36).substr(2, 9)
But while reading the source code of the Material Badge component, I found an elegant approach that I wish to frame on the wall someday 😂. It generates a unique _contentId for each instance of the directive without much fuss.
import { Directive } from '@angular/core';

let nextId = 0;

@Directive({
  selector: '[appToolbarLegends]'
})
export class LegendsDirective {
  private _contentId: string = `toolbar_${nextId++}`;
}
@Input property setter vs ngAfterViewInit
Before we get into the getter/setter, let's understand when and why to use ngAfterViewInit. It is fairly easy to understand — it is a life-cycle hook that fires once the view of the component or directive it is attached to is initialized and all of its bindings are evaluated. That means if you are not querying the DOM, or DOM attributes that have interpolation bindings on them, you can simply use a class setter method as a substitute.
import { Directive, Input } from '@angular/core';

let nextId = 0;

@Directive({
  selector: '[appToolbarLegends]'
})
export class LegendsDirective {
  private _contentId: string = `toolbar_${nextId++}`;

  @Input('appToolbarLegends')
  set config(toolbarConfig: object) {
    console.log(toolbarConfig); // logs {position: "right"} object
  }
}
The class setters are called way before ngAfterViewInit or ngOnInit, hence they speed up the directive instantiation slightly. Also, unlike ngAfterViewInit or ngOnInit, the class setters are called every time a new value is about to be set, giving us the benefit of destroying/recreating the plugin with new configurations.
Demo Day
Thanks for coming this far. The moral of the story is: do read code written by others, no matter which open-source project it is.
This blog post about TypeScript Generics sat in my drafts for a while (not sure why), and it is finally out 🤞. But before we get into Generics, we should understand what makes types in TypeScript exciting.
TypeScript allows us to type-check code so that errors are caught at compile time instead of run time. For example, the following JavaScript may look correct when written but will throw an error in a browser or NodeJS environment.
const life = 42;
life = 24; // OK at compile time
In this case, TypeScript can infer the type of the variable life from its value 42 and report the error in the editor/terminal. Additionally, you can specify the type explicitly if needed:
const life: number = 42;
life = 24; // Throws an error at compile time
Named Types
TypeScript ships with a handful of basic types such as number, string, object, array, boolean, any, and void (apart from undefined, null, never, and the more recent unknown). However, these are not enough, especially in a large project where we use them together and need a sort of umbrella type, a custom type that holds them together and is reusable. Such aliases are called named types, and they can be used to create custom types: classes, interfaces, enums, and type aliases.
For example, we can create a custom type, MyType comprising a few primitive types as follows.
interface MyType {
  foo: string;
  bar: number;
  baz: boolean;
}
But what if we want foo to be either a string, an object, or an array!? One way is to copy the existing interface into MyType2 (and so on).
interface MyType2 {
  foo: Array<string>;
  bar: number;
  baz: boolean;
}
Just as we pass arbitrary values to a function via its parameters to make it reusable, what if we were allowed to do the same for MyType's types as well? With this approach, no code duplication would be needed to handle the same shape of data with different types. But before we dive in, let us first understand the problem with more clarity. To do that, we can write a cache function that caches some random string values, as shown below.
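That string-only snippet is not reproduced above; based on the CacheManager code shown later, it presumably looks like this (a reconstruction):

```ts
(<any>window).cacheList = {};

// a cache function that only accepts string values
function cache(key: string, value: string): string {
  (<any>window).cacheList[key] = value;
  return value;
}

cache('foo', 'bar');  // works
// cache('life', 42); // Error: type 'number' is not assignable to type 'string'
```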
Because of the strict type checking, we are forcing the parameter value to be of type string. But what if someone wants to cache a numeric value? I'll give you a hint: which operator in JavaScript do we use for a fallback value when the expected value of a variable is falsy? Correct, we use ||, a.k.a. the logical OR operator. Now imagine for a second that you are the creator of the TypeScript language (sorry, Anders Hejlsberg) and willing to resolve this issue for all developers. You might go for a similar solution, a fallback type, and after countless hours of brainstorming end up using a bitwise-looking operator, | in this case (FYI, that thing is called union types in the alternate dimension where Anders Hejlsberg is still the creator of TypeScript).
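With union types, the fallback version might read (again a sketch):

```ts
function cache(key: string, value: string | number): string | number {
  (<any>window).cacheList[key] = value;
  return value;
}

cache('foo', 'bar'); // still works
cache('life', 42);   // now works too
```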
Isn't that amazing? Wait, but what if someone wants to cache boolean values, or arrays/objects of custom types!? Since the list is never-ending, our current solution is clearly not scalable. Wouldn't it be great to control these types from outside!? I mean, how about we define placeholder types inside the above implementation and provide the real types from the call site instead?
Generic Function
Let us use ValueType (or any other placeholder, even simply T, to suit your needs) as a placeholder wherever needed.
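Which gives us something like this sketch of the generic function the text describes:

```ts
function cache<ValueType>(key: string, value: ValueType): ValueType {
  (<any>window).cacheList[key] = value;
  return value;
}

cache<string>('foo', 'bar');   // works
cache<boolean>('baz', true);   // works, no union-type bookkeeping needed
```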
We can even pass the custom type parameter MyType to the cache method in order to type-check the value for correctness (try changing bar's value to something non-numeric and see for yourself).
cache<MyType>("bar", { foo: "foo", bar: 42, baz: true });
This mechanism of parameterizing types is called Generics. This, in fact, is a generic function.
Generic Classes
Similar to the generic function, we can also create generic classes using the same syntax. Here we create a wrapper class CacheManager to hold the previously defined cache method, with the global (<any>window).cacheList variable turned into a private property cacheList.
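A reconstruction of that class, based on the constrained version shown later:

```ts
class CacheManager<ValueType> {
  private cacheList: { [key: string]: ValueType } = {};

  cache(key: string, value: ValueType): ValueType {
    this.cacheList[key] = value;
    return value;
  }
}

new CacheManager<string>().cache('foo', 'bar');
new CacheManager<number[]>().cache('fib', [1, 1, 2, 3, 5]);
```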
The above code is perfect for encouraging reuse of CacheManager while caching all types of values; but someday there will be a need for varying types in MyType's properties as well, which brings back the original MyType vs MyType2 problem (from the Named Types section above). To save us from duplicating the custom type MyType to accommodate varying property types, TypeScript allows generic types even on interfaces, which makes them generic interfaces. In fact, we are not restricted to one type parameter below; use as many as needed. Additionally, we can have union types as a fallback to the provided type parameters, which permits us to pass an empty {} object while using the generic interface wherever we desire the default types of values.
interface MyType<FooType, BarType, BazType> {
  foo: FooType | string;
  bar: BarType | number;
  baz: BazType | boolean;
}

new CacheManager<MyType<{}, {}, {}>>().cache("bar", { foo: "bar", bar: 42, baz: true });
new CacheManager<MyType<number, string, Array<number>>>().cache("bar", { foo: 42, bar: "bar", baz: [0, 1] });
I know it looks a bit odd to pass {} wherever the default parameter types are used. However, this has been resolved in TypeScript 2.3, which allows us to provide default type arguments so that passing {} for the type parameters becomes optional.
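A sketch of MyType with default type arguments:

```ts
interface MyType<FooType = {}, BarType = {}, BazType = {}> {
  foo: FooType | string;
  bar: BarType | number;
  baz: BazType | boolean;
}

// no more explicit {} at the call site
new CacheManager<MyType>().cache('bar', { foo: 'bar', bar: 42, baz: true });
```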
Generic Constraints
When we implicitly or explicitly use a type for anything, such as the key parameter of the cache method above, we purposefully constrain the type of key to string; but we are inadvertently allowing any string at all. What if we want to constrain the type of key to be one of the interface properties of MyType only!? One naive way is to use string literal types for key, as below, and in return get an [ts] Argument of type '"qux"' is not assignable to parameter of type '"foo" | "bar" | "baz"' error when the key qux does not belong to MyType. This works correctly; however, because of the hard-coded string literal types, we cannot reuse the class with the YourType interface, since it has different properties, as follows.
interface YourType {
  qux: boolean;
}

class CacheManager<ValueType> {
  private cacheList: { [key: string]: ValueType } = {};

  cache(key: "foo" | "bar" | "baz", value: ValueType): ValueType {
    this.cacheList[key] = value;
    return value;
  }
}

new CacheManager<MyType<{}, {}, {}>>().cache("foo", { foo: "bar", bar: 42, baz: true }); // works
new CacheManager<MyType<{}, {}, {}>>().cache("qux", { foo: "bar", bar: 42, baz: true }); // !works
new CacheManager<YourType>().cache("qux", { qux: true }); // !works but should have worked
To make it work, we would have to manually update the string literal types used for key in the class implementation every time. However, similar to classes, TypeScript also allows us to extend generic types, so we can replace the string literal types "foo" | "bar" | "baz" of key with a generic constraint, i.e. KeyType extends keyof ValueType, as follows. Here, we are forcing the type-checker to validate that the type of key is one of the properties of the interface provided to CacheManager. This way, we can even serve the previously mentioned YourType without any error.
class CacheManager<ValueType> {
  private cacheList: { [key: string]: ValueType } = {};

  cache<KeyType extends keyof ValueType>(key: KeyType, value: ValueType): ValueType {
    this.cacheList[key] = value;
    return value;
  }
}

new CacheManager<MyType<{}, {}, {}>>().cache("foo", { foo: "bar", bar: 42, baz: true }); // works
new CacheManager<MyType<{}, {}, {}>>().cache("qux", { foo: "bar", bar: 42, baz: true }); // !works
new CacheManager<YourType>().cache("qux", { qux: true }); // works
Alright, that brings us to the climax of Generics. I hope \0/ Generocks \m/ for you.
If you found this article useful in any way, feel free to donate and receive my dilettante painting as a token of appreciation.
Six months ago, I knew nothing about Machine Learning; I had always viewed it as a distant, almost mystical field reserved for mathematicians and data scientists. Then one day, curiosity got the better of me. I wanted to know: how does a machine actually learn?
That simple question sent me down a rabbit hole. I read countless books, watched hours of videos, and wrestled with concepts that stretched my understanding of math, statistics, and algorithms. The journey was humbling. There were moments when I felt utterly lost, and others when a concept finally clicked and lit up my brain like a light bulb.
This book was born out of that experience: the struggle, the joy, and the sheer curiosity that kept me going. It’s written with pure love for understanding how the hell machine learning actually works from the ground up. Note that I’m not an expert in Machine Learning or Artificial Intelligence and by the end of this book, you won’t be one either. That’s not the goal. What this book aims to do is something more fundamental: to help you see a drop from the ocean of ML knowledge through the lens of first principles, much like understanding physics through its equations. By the time you finish, you’ll have the intuition, mathematical grounding, and confidence to explore more advanced algorithms and concepts on your own, but with true understanding. We’ll explore the underlying mathematics, visualise concepts through graphs, derive algorithms step by step, and eventually bring them to life with Python.
The goal is not to rush to the latest buzzwords or neural networks, but to peek under the hood of machine learning to see how equations turn into intelligent behaviour.
As a full-time frontend architect, I’m writing and releasing this book chapter by chapter, at a pace that fits around my day job and life. So, think of this as a journey we’re taking together, one concept at a time.
This article is a work of fiction and intended for fun reading only 😊
From the beginning of human civilisation, we humans have been trying to replicate nature artificially, leading up to the advancements in AGI today. As we have already moved in that direction with humanoid robots, omniscient artificial intelligences, etc., let's begin with what the human body/intelligence is made up of and compare it to the artificial intelligence of today and tomorrow. This article is mere food for thought about considering the Vedas' descriptions of human intelligence to create an AGI.
It is said that the human body is made up of 84 tatvas. What if we used some of them to create an AGI!?
5 Mahabhutas a.k.a. Great Elements
In Vedas, the concept of Panchatatvas refers to the five fundamental elements that constitute the material universe. These elements—Prithvi (Earth), Apas (Water), Tejas (Fire), Vayu (Air), and Akasha (Space)—are considered the building blocks of creation.
Prithvi (Earth)
Prithvi represents stability, strength, and nourishment. In the body, it corresponds to bones, muscles, and other structural components. Similarly, in Robotics, it maps to the hardware and mechanical structure of a robot's body, chassis, and framework. Without a solid structure, the robot cannot operate effectively in the physical world. Boston Dynamics' Atlas, Figure 01, Digit by Agility Robotics, and many more can be considered proven examples.
Apas (Water)
Apas represents fluidity, adaptability, and connectivity. It is essential for movement, change, and nurturing the human body. Similarly, in Robotics, it symbolises the hydraulics or actuators which provide smooth and controlled movement. Recently, I came across https://clonerobotics.com/android, which uses water-powered muscles instead of traditional motors, indicating that water (or other liquids) will probably be used by many other robotics companies.
Agni (Fire)
Agni represents energy, transformation, and the ability to perform actions. It is associated with heat, light, and power. Similarly, in Robotics, it represents the power systems that drive the robot, such as batteries, electrical circuits, and energy sources.
Vayu (Air)
Vayu symbolises movement, life force, and the flow of energy. It is associated with communication and the spread of ideas. Similarly, in Robotics, it represents network connectivity and wireless communication (e.g., Wi-Fi, Bluetooth), which allow robots to communicate with other systems, share data, and perform tasks collaboratively. Air-driven components, such as pneumatic systems, are also used in robotics for lightweight and fast motion.
Akasha (Ether)
Akasha represents the abstract layer of intelligence, connectivity, and awareness. The idea that Ether is a form of consciousness in the human body is rooted in spiritual, metaphysical, and esoteric traditions rather than scientific understanding. I strongly believe that today's AIs are just mimicking human thoughts, not consciousness. What if we could control consciousness for robots? Teleoperation at Clone is the closest thing I have found in Robotics today.
5 Tanmatras a.k.a. Subtle Elements
In the Vedas, the five Tanmatras—Shabda (sound), Sparsha (touch), Rupa (sight), Rasa (taste), and Gandha (smell)—are the subtle elements that enable sensory perception. They serve as the foundational building blocks for interactions between the physical body and the external world. Compared to robotics or AI/ML systems, these Tanmatras can be analogised to the sensory inputs and data-processing mechanisms that allow machines to perceive and interact with their environment.
Shabda (Sound)
Shabda represents auditory perception through the ears, enabling humans to interpret vibrations as language, music, or environmental sounds. Similarly, in Robotics/AI, it maps to microphones and acoustic sensors. Voice assistants like ChatGPT or Siri convert sound waves into text and meaning, and may soon detect and classify sounds even more precisely.
Sparsha (Touch)
Sparsha represents tactile perception through the skin, including sensations like temperature, pressure, and texture. Similarly, in Robotics/AI, there is haptic feedback, which will be a really useful feature for household robots in the near future.
Rupa (Sight)
Rupa represents visual perception through the eyes, involving the ability to detect form, colour, motion, and depth. Similarly, in Robotics/AI, cameras act as robotic eyes to capture images and videos, whereas AI algorithms process the visual data to detect objects, faces, or scenes.
Rasa (Taste)
Rasa represents gustatory perception through the tongue, involving the detection of sweet, sour, salty, and bitter flavours. Similarly, in Robotics/AI, e-tongues detect the chemical composition of substances, mimicking human taste for quality control in the food and beverage industries.
Gandha (Smell)
Gandha represents olfactory perception through the nose, involving the detection of scents and odours from the environment. Similarly, in Robotics/AI, e-noses analyse volatile compounds to detect specific smells, often using gas sensors combined with AI for classification. Today's robots do not provide much provision for Rasa and Gandha, but who knows, we may have them in the near future 🙂
5 Jnanendriyas a.k.a. Five Organs of Perception
In the Vedas, the concept of Panch Vishayas refers to the faculties of perception.
Chakshu (Sight)
Chakshu refers to visual perception through the eyes, allowing humans to detect light, form, colour, depth, and movement. Similarly, in Robotics/AI, Computer Vision is the field of AI dedicated to enabling machines to interpret and understand visual information from the world.
Karna (Ears)
Karna refers to auditory perception, allowing humans to hear sounds and interpret vibrations through the ears. Similarly, in Robotics/AI, microphones serve as the robotic equivalent of ears.
Tvak (Skin)
Tvak refers to the sensation of touch, enabling humans to feel pressure, texture, temperature, and pain through the skin. Similarly, in Robotics/AI, tactile sensors (force, pressure, or temperature sensors) are used to mimic human touch, and haptic feedback can simulate the sense of touch.
Jihva (Tongue)
Jihva refers to the gustatory sense, allowing humans to taste different flavours. Similarly, in Robotics/AI, the e-tongue is designed to simulate the human sense of taste using chemical sensors.
Nāsika (Nose)
Nāsika refers to the olfactory sense, enabling humans to detect and interpret smells. Similarly, in Robotics/AI, e-noses use arrays of chemical sensors to detect odours or gases in the environment, mimicking the human sense of smell.
5 Karmaindriyas a.k.a. Organs of Action
The 5 Karmaindriyas (organs of action) are responsible for performing actions in the physical world. When comparing these to AI/ML and robotics, the Karmaindriyas align with the actuators, interfaces, and decision-making systems that allow machines to act on the environment.
Vāk (Speech)
Vāk represents the power of speech, communication, and expression through language. Similarly, in Robotics/AI, Text-to-Speech (TTS) systems and assistants such as Alexa, Siri, and Google Assistant use AI to respond verbally to user queries.
Pāṇi (Hands)
Pāṇi represents the ability to grasp, manipulate, and interact with objects. Similarly, in Robotics/AI, robotic arms and manipulators are equipped with actuators and tactile sensors to perform tasks like assembling, painting, or surgery.
Pāda (Feet)
Pāda refers to movement, walking, and locomotion. Similarly, in Robotics/AI, robots like Boston Dynamics' Atlas mimic human-like walking and running.
Pāyu (Excretion)
Pāyu is responsible for excretion and the removal of waste, maintaining bodily health. Today's robots do not have an equivalent.
5 Vayus a.k.a. Vital Energies
The five Vayus in Vedic philosophy represent the vital forces or winds that govern physiological and energetic functions in the human body. These Vayus can be conceptually compared to key systems in AI/ML and robotics, which manage various processes to enable perception, action, and coordination.
Prāṇa Vayu (Inward Energy Flow)
Governs inhalation, the intake of energy, and the primary life force. It is responsible for vital processes like breathing and sensory input in the human body. Similarly, in Robotics/AI, lithium-ion batteries or hydrogen fuel cells might be called so.
Apāna Vayu (Downward Energy Flow)
Governs elimination, excretion, and the removal of waste. It facilitates processes that purge what is no longer needed. Today's robots do not have an equivalent since they do not excrete.
Samāna Vayu (Balancing Energy Flow)
Governs digestion, assimilation, and distribution of energy throughout the body. It ensures equilibrium. Feedback loops and control systems that monitor and adjust robotic behaviour might be called so.
Udāna Vayu (Upward Energy Flow)
Governs upward movement, expression, and communication. It is associated with speech, self-expression, and energy that lifts upward. Today's robots and voice assistants use speech-synthesis or Text-to-Speech technology, but who knows, futuristic robots may use human-like vocal cords.
Vyāna Vayu (Outward Energy Flow)
Governs circulation and coordination, ensuring the distribution of energy throughout the body and the connection between all its parts. Cooling systems, battery management systems, power management units, etc. may be called so.
Antahkarana Panchak
The Antahkarana Panchak (the five aspects of the internal instrument) comprises the subtle faculties of the human mind that govern perception, decision-making, memory, and action.
Manas (Mind)
This aspect is responsible for processing sensory input, forming thoughts, and making choices. It is reactive, emotional, and deliberative.
Buddhi (Intellect)
The higher reasoning faculty that discerns truth from falsehood, good from bad, and right from wrong. It represents wisdom and clarity.
Chitta (Subconscious Mind)
Stores and recalls past experiences. Shapes desires and tendencies through accumulated impressions in humans.
Ahamkara (Ego)
Differentiates the self from others. Drives attachment and desire.
I do not fully agree with mapping Manas to a context vector, because Manas is far more complex to explain. But since every expert has been promoting agents as the next big thing in 2025 and onwards, it's time to take a look at speech.
Traditional Vedantic philosophy describes speech in its various forms, often linking it to the mind's purity, intention, and self-awareness, which aligns with the progression from the subtle (Para) to the gross (Vaikhari).
Para (परा)
Para is implicitly connected to the essence of divine consciousness, the source of all thought and speech. It often denotes Param-artha (the supreme truth) and the importance of aligning one's speech and thoughts with ultimate reality. Similarly, pre-training LLMs on the truest possible dataset may be called so, so that they do not produce contorted facts or utter lies.
Pashyanti (पश्यंती)
Pashyanti refers to the visionary and intuitive aspect of speech, closer to the subtle impressions of truth in the mind. Similarly, LLM guardrails that prevent answering unsafe user queries may be the closest comparison.
Madhyama (मध्यमां)
Madhyama corresponds to the internal mental dialogue and the preparation of thoughts before they are expressed. Similarly, an LLM generates the probability distribution over all possible next words.
Vaikhari (वैखरी)
Vaikhari represents the spoken word, which must uplift others and align with dharma (rightness). Similarly, an LLM's multi-modal output may be called so.
Trigunas (3 fundamental qualities)
In traditional Vedantic philosophy, trigunas are the fundamental qualities or modes of nature that govern human behaviour, the mind, and the material world.
Sattva (Purity and Harmony)
Sattva represents knowledge, purity, goodness, balance, and serenity. It is associated with the pursuit of truth, selflessness, and spiritual wisdom. Similarly, household or industrial robots and AI agents that help humans with day-to-day activities and make them more productive can be said to have Sattva Guna in them.
Rajas (Desire)
Rajas represents passion, desire, ambition, action, and restlessness. It fuels ego-driven actions and desires. Similarly, hacked household or industrial robots and unethical AI agents duping humans can be said to have Rajas Guna in them.
Tamas (Inertia and Darkness)
Tamas represents ignorance, inertia, lethargy, delusion, and destruction. Similarly, bad AI agents or a Skynet spreading fake propaganda and maligning someone with harmful output can be said to have Tamas Guna in them.
It will be interesting to see which of these qualities will be added to Robotics/AI systems in 2025 and beyond. Happy New Year and thanks for reading 😊.