Artificial intelligence (AI) encompasses many complex, emerging technologies
that perform tasks which once required human input.
Broadly speaking, AI is a non-human program, model, or computer that
demonstrates a wide range of problem-solving and creativity. Historically,
computers used these advanced functions to understand and recommend
information. With generative AI, computers can even generate new content.
The acronym AI is often used as a catch-all for various technologies within
the field of artificial intelligence, but AI capabilities can vary greatly.
Here you’ll find a number of terms and concepts for AI in practice on the web.
To learn more about machine learning, review the
machine learning glossary.
How does AI work?
Training is the first step for every model: machine learning engineers build
an algorithm that gives the model specific inputs and demonstrates the optimal
outputs. By and large, web developers don’t need to perform this step, though
you may benefit from understanding how a given model was trained. While it’s
possible to fine-tune a model, your time is usually better spent picking the
best model for your task.
Inference is the process of a model drawing conclusions based on new data.
The more training a model has in a specific area, the more likely the inference
creates useful and correct output. However, there is no guarantee of perfect
inference, no matter how much training a model has received.
For example, Green Light uses an
AI model trained on data from Google Maps to understand traffic patterns. As
more data is received, inference is performed to provide recommendations to
optimize traffic lights.
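The training-then-inference split can be sketched in a few lines of JavaScript. This toy model and its weights are made up for illustration; real models learn millions or billions of parameters during training.

```javascript
// Toy illustration of inference: applying parameters that were already
// learned during training to brand-new input. These weights are invented
// for demonstration only.
const model = {
  weight: 2.5, // learned during training
  bias: 10,    // learned during training
};

// Inference: the model draws a conclusion (a prediction) from new data.
function infer(model, input) {
  return model.weight * input + model.bias;
}

console.log(infer(model, 4)); // → 20, for an input the model never saw
```

The key point: inference only applies what training already captured, which is why more relevant training data tends to produce more useful inference.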
Where is AI performed?
AI training is completed before a model is released. Further training may lead
to new versions of a model with more capabilities or better accuracy.
Web developers should be concerned with where AI inference is performed.
Inference largely determines the cost of using AI, and it also greatly affects
the range of capability available to a single model.
Client-side AI
While most AI features on the web rely on servers, client-side AI runs
in the user’s browser and performs inference on the user’s device. This offers
lower latency, reduced server-side costs, removed API key requirements,
increased user privacy, and offline access. You can implement client-side AI
that works across browsers with JavaScript libraries, including
Transformers.js,
TensorFlow.js, and
MediaPipe.
It’s possible for a small, optimized client-side model to outperform a
larger server-side counterpart, especially when
optimized for performance. Assess your
use case to determine what solution is right for you.
Server-side AI
Server-side AI encompasses cloud-based AI services. Think Gemini 1.5 Pro
running in the cloud. These models tend to be much larger and more powerful. This
is especially true of large language models.
Hybrid AI
Hybrid AI refers to any solution including both a client and server component.
For example, you could use a client-side model to perform a task and fall back
to a server-side model when the task cannot be completed on the device.
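A minimal sketch of this fallback pattern in JavaScript, assuming you supply your own `clientTask` and `serverTask` functions (both names are hypothetical placeholders):

```javascript
// Hybrid AI pattern: try the client-side model first, and fall back to a
// server-side model if the device can't complete the task.
async function runHybrid(input, clientTask, serverTask) {
  try {
    return await clientTask(input);
  } catch (error) {
    // Model unavailable, out of memory, unsupported browser, and so on.
    return await serverTask(input);
  }
}
```

In practice, `clientTask` might wrap a Transformers.js pipeline, while `serverTask` could be a `fetch()` call to a cloud API.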
Machine learning (ML)
Machine learning (ML) is the process by which a computer learns to perform
tasks without explicit programming. Where AI strives to generate intelligence,
ML consists of algorithms that make predictions from data sets.
For example, suppose we wanted to create a website which rates the weather on
any given day. Traditionally, this may be done by one or more meteorologists,
who could create a representation of Earth’s atmosphere and surface, compute and
predict the weather patterns, and determine a rating by comparing the current
data to historical context.
Instead, we could give an ML model an enormous amount of weather data until the
model learns the mathematical relationship between weather patterns,
historical data, and guidelines on what makes the weather good or bad on any
particular day. In fact, we’ve
built this on the web.
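At toy scale, the idea looks something like the following sketch: instead of hand-written rules, a model learns a weather rating from example data. The samples, ratings, and linear model here are invented purely for illustration.

```javascript
// A toy "rate the weather" model. Instead of hand-coding meteorological
// rules, we give the model example data and let it learn the relationship.
// The data and ratings below are made up.
const samples = [
  { tempC: 10, rating: 4 },
  { tempC: 20, rating: 6 },
  { tempC: 30, rating: 8 },
];

// Learn rating ≈ w * tempC + b with basic gradient descent.
let w = 0;
let b = 0;
const learningRate = 0.001;

for (let step = 0; step < 200000; step++) {
  let gradW = 0;
  let gradB = 0;
  for (const { tempC, rating } of samples) {
    const error = w * tempC + b - rating;
    gradW += error * tempC;
    gradB += error;
  }
  w -= (learningRate * gradW) / samples.length;
  b -= (learningRate * gradB) / samples.length;
}

const rate = (tempC) => w * tempC + b;
console.log(rate(25)); // ≈ 7, for a day the model never saw
```

Real weather models learn from vastly more data and far more variables, but the principle is the same: the relationship is learned, not programmed.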
Generative AI and large language models
Generative AI is a form of machine learning that helps users create content
that feels familiar and mimics human creation.
Generative AI uses large language models to organize data and create or modify
text, images, video, and audio, based on supplied context. Generative AI goes
beyond pattern matching and predictions.
A large language model (LLM) has numerous (often billions of)
parameters that you can use to perform a wide variety of tasks, such
as generating, classifying, or summarizing text or images.
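To make "predicting text from context" concrete at a drastically reduced scale, here is a toy next-word predictor that only counts word pairs in a tiny, made-up corpus. A real LLM learns billions of parameters rather than a small lookup table, but both predict likely continuations.

```javascript
// A drastically simplified illustration of next-word prediction.
const corpus = 'the cat sat on the mat the cat ate the fish';
const words = corpus.split(' ');

// "Training": count how often each word follows another (a bigram table).
const counts = {};
for (let i = 0; i < words.length - 1; i++) {
  const current = words[i];
  const next = words[i + 1];
  if (!counts[current]) counts[current] = {};
  counts[current][next] = (counts[current][next] || 0) + 1;
}

// "Inference": predict the most frequent follower of a word.
function predictNext(word) {
  const followers = counts[word] || {};
  let best = null;
  for (const candidate of Object.keys(followers)) {
    if (best === null || followers[candidate] > followers[best]) best = candidate;
  }
  return best;
}

console.log(predictNext('the')); // → 'cat' (it follows 'the' most often)
```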
Chatbots have become incredibly popular tools for people to use generative AI.
These tools can create written prose, code samples, and artwork. They can help
you plan a vacation, soften or professionalize the tone of an email, or classify
different sets of information into categories.
There are endless use cases for developers and for non-developers.
Deep learning
Deep learning (DL) is a class of ML algorithms. One example would be Deep Neural
Networks (DNNs), which attempt to model the way the human brain is believed to
process information.
A deep learning algorithm may be trained to associate certain features in
images with a specific label or category. Once trained, the algorithm can make
predictions that identify that same category in new images. For example,
Google Photos can identify the difference between cats and dogs in a photo.
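The layered computation that makes deep learning "deep" can be sketched as a forward pass through two small layers. The weights below are made up for illustration; a trained network learns them from large labeled datasets.

```javascript
// Sketch of what "deep" means: data flows through layers of simple units,
// each applying weights, a bias, and a nonlinearity (ReLU here).
const relu = (x) => Math.max(0, x);

// One layer: multiply inputs by weights, add a bias, apply the activation.
function layer(inputs, weights, biases) {
  return weights.map((row, i) =>
    relu(row.reduce((sum, wj, j) => sum + wj * inputs[j], biases[i]))
  );
}

// Two stacked layers: the output of one becomes the input of the next.
// These weights are invented; training would learn them from data.
function forward(features) {
  const hidden = layer(features, [[1, -1], [0.5, 0.5]], [0, 0]);
  const output = layer(hidden, [[1, 1]], [0]);
  return output[0];
}

console.log(forward([2, 1])); // → 2.5
```

Image classifiers use the same layered structure, just with many more layers and inputs (pixel values instead of two numbers).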
Natural language processing (NLP)
Natural language processing is a class of ML that focuses on helping computers
comprehend human language, from the rules of any particular language to the
idiosyncrasies, dialect, and slang used by individuals.
Challenges with AI
There are several challenges when building and using AI. The following are just
a few highlights of what you should consider.
Data quality and recency
The large datasets used to train AI models are often inherently out-of-date
soon after they’re collected. This means when seeking the most recent information,
you may benefit from prompt engineering to enhance
an AI model’s performance on specific tasks and produce better outputs.
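One common prompt engineering pattern is to inject up-to-date context into the prompt, so the model isn’t limited to its training data. A minimal sketch, with invented example facts:

```javascript
// Build a prompt that grounds the model in fresh information.
// In practice, `freshFacts` would come from your own API or database;
// the facts below are made-up examples.
function buildPrompt(question, freshFacts) {
  return [
    'Answer using only the context below.',
    'Context:',
    ...freshFacts.map((fact) => `- ${fact}`),
    `Question: ${question}`,
  ].join('\n');
}

const prompt = buildPrompt('Is the store open today?', [
  'Today is a public holiday.',
  'The store closes on public holidays.',
]);
console.log(prompt);
```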
Datasets can be incomplete or too small to effectively support some use cases.
It can be useful to try working with multiple tools or customizing the model to
suit your needs.
Concerns with ethics and bias
AI technology is exciting and has a lot of potential. However, ultimately,
computers and algorithms are built by humans, trained on data that may be
collected by humans, and are thus subject to several challenges. For example,
models can learn and amplify human bias and harmful stereotypes, directly
impacting the output. It’s important to approach building AI technology with
bias mitigation as a priority.
There are numerous ethical considerations about the copyright
of AI-generated content: who owns the output, especially if it’s heavily
influenced by or directly copied from copyrighted material?
Before generating new content and ideas, consider existing policies on how to
use the material you create.
Security and privacy
Many web developers have said that privacy and
security are their top concerns in using AI tools. This is especially true in
business contexts with strict data requirements, such as governments and
healthcare companies. Exposing user data to more third parties with cloud APIs
is a concern. It’s important that any data transmission is secure and
continuously monitored.
Client-side AI may be the key to addressing these use
cases. There’s much more research and development left to do.
Get started with AI on the web
Now that you’re familiar with the many types of artificial intelligence, you can
start to consider how to use existing models to become more productive and build
better websites and web applications.
You could use AI to:
- Build a better autocomplete for your site’s search.
- Detect the presence of common objects, such as humans or pets, with a smart camera.
- Address comment spam with a natural language model.
- Improve your productivity by enabling autocomplete for your code.
- Create a WYSIWYG writing experience with suggestions for the next word or sentence.
- Provide a human-friendly explanation of a dataset.
Pre-trained AI models can be a great way to improve your websites, web apps,
and productivity, without needing a full understanding of how to build the
mathematical models and gather the complex datasets that power the most popular
AI tools.
You may find most models meet your needs right away, without further adjustment.
Tuning is the process of taking a model that has already been trained on a
large dataset and training it further to meet your specific usage needs. There
are a number of techniques to tune a model:
- Reinforcement learning from human feedback (RLHF) is a technique which uses
human feedback to improve a model’s alignment with human preferences and
intentions.
- Low-Rank Adaptation (LoRA) is a parameter-efficient method for LLMs which
reduces the number of trainable parameters while maintaining model performance.