Tim’s Thoughts on the Semantic Web in 1999 and the Reality of It in 2018!

A lot has been discussed about the Semantic Web since Tim Berners-Lee and James Hendler published their paper in Scientific American about the “new form of machine readable content on the web to create new possibilities, especially creation of knowledge.”

Since then, both writers, on various occasions and in numerous presentations and interviews, have been urging people to use linked data and add semantics to their web documents.

But how far have their predictions come true?

Is the web extended?

Do we already have a semantic web?

Besides the concept of the Semantic Web, there are some important questions that inspired me to write this post and to organize it the way it is.

To be precise, here is how this article is structured:

Semantic Web Concept

The Technology

Implementations of SW


Google Knowledge Graph - Semantic Web is ON!

I just tried the search, “How many goals did Harry Kane score in 2018 World Cup.”

Probably in 2015 or 2016:

  • I would have had to type the entire query in the search engine
  • I would have had to search for a reliable website amongst at least the 3–4 top results
  • I would have had to skim around the website for the answer to my question and analyze the content or a table presented on the website

But now, it’s as easy as this:

Here are the advantages of the above result:

  • You find the answer in less than a second.
  • You do not have to search other web documents for the information.
  • The information you get is as granular as it gets (no reading of content or checking table headers on some other document).

This is the result of the Google Knowledge Graph, a technology that implements Semantic Web concepts. They probably use Resource Description Framework (RDF) for describing the data, since Google bought Freebase, which was built on RDF, a Semantic Web basic building block.

They go by the slogans “Things, not strings” and “Moving from information to knowledge”.

“Things, not strings,” related meaningfully, is actually another basic definition of the Semantic Web. So they are trying to make everything on the web an entity, be it a place, person, or property, and to connect those entities.

In this pursuit of making Google a knowledge source, the search giant bought Freebase and transferred its large knowledge base (with over 200 million entities) to the Wikidata project.

The data from these sources and all other websites using schema.org structured data provide the resources (entities).

Google Knowledge Graph connects the dots between those entities.

Take the above example again:

Connecting Dots

  1. Harry Kane is an entity (maybe from Freebase or Wikidata).
  2. The FIFA World Cup 2018 is another entity (maybe from Freebase or Wikidata).
  3. Harry Kane played in the FIFA World Cup 2018: the dots are connected and the information is displayed; more precisely, one instance of the information that can be derived from the combination of these two semantic entities is displayed.
Further Connections

The 2018 World Cup entity has other related entities, such as the list of top scorers (fifa.com or some other website has structured data allowing Google to use it).

Making use of such structured data and connecting the dots, Google is able to give us instant, granular results on a few topics.
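To make the "connecting the dots" idea concrete, here is a toy sketch of a knowledge graph as a set of triples. This is not Google's actual implementation; the entity and relation names are invented for illustration.

```python
# A toy knowledge graph: each fact is a (subject, relation, object) triple.
# Entity and relation names are invented for this sketch.
triples = [
    ("HarryKane", "playedIn", "FIFAWorldCup2018"),
    ("HarryKane", "instanceOf", "Person"),
    ("FIFAWorldCup2018", "instanceOf", "SportsEvent"),
    ("FIFAWorldCup2018", "topScorer", "HarryKane"),
]

def related(entity, relation):
    """Return all objects linked to `entity` by `relation`."""
    return [o for s, p, o in triples if s == entity and p == relation]

# The question "who was the top scorer?" becomes a direct graph lookup,
# not a document search.
print(related("FIFAWorldCup2018", "topScorer"))  # ['HarryKane']
```

A real knowledge graph works at a vastly larger scale, but the principle is the same: the answer is read off the graph rather than dug out of documents.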

Here is what Google says about how they implement RDF in the Knowledge Graph.

This is technically a Semantic Web implementation: a machine (the search engine) processes and analyses the data and provides you the most granular result possible.

Granular, easy-to-consume results are only one of the uses of the Semantic Web we can relate to.

Besides this, the Semantic Web has a lot to offer in terms of data interchange, interoperability, and reasoning, automating the web efficiently.

How would you like it if I told you car companies can use the Semantic Web to improve their car repairs?

How would you like it if I told you the Semantic Web might play a crucial role in new innovations in gene research?

The list goes on: integrating information, web services, targeted advertising, data-as-a-service clouds.

To find out how, let's see what the Semantic Web actually is in the next section.

What Is the Semantic Web?

Different Assumptions of Semantic Web

In simple terms: “The Semantic Web is the next stage of the web.”

However, there are numerous assumptions about what the Semantic Web is.

  • Some consider it an extension of the web.
  • Some believe it is data about data.
  • Some consider it a universal data model.
  • Some consider it a technology enabling machines to understand semantics, so that we do not have to analyse the documents.
  • Some consider it the annotation of web documents.
  • Other assumptions are about turning the web of documents into a web of data.
However, here is the technical definition from the W3C: “It is the framework with few technologies allowing the process of data reuse and sharing across different applications, organisations and other categorical boundaries.”

For all the above approaches to be possible, especially the reuse of data as defined by the W3C, the key is defining the data and the relations between data chunks (resources: websites, things, people, parts of a website, organisations, or applications).

Web 2.0 to Web 3.0

For a better understanding of what the Semantic Web (Web 3.0) is, let's look at some similarities and differences between Web 2.0 and Web 3.0.

The present web, or Web 2.0

  1. It is a web of documents.
  2. The connections between documents are URLs and hyperlinks.
  3. With the advancement of HTML, there are a few rules about structure and visual appeal.
  4. Better looks and visual appeal are the focus, not structure.
  5. The web we use now is meant for human readers.
  6. Machines cannot read and understand the semantics or context of the text in the documents.
Limitations of Web 2.0

As mentioned earlier, Web 2.0 is the collection of interlinked documents.

So, the granular level of the data is a document.

What does that mean?

The smallest unit of data a person using the web can get is usually a document.

So the whole document has to be studied to get what you are searching for on the web.

When you search on the internet, there will usually be tens of documents, creating chaos in the search process.

Above all, the documents, and the data in them, are hardly ever reused on the web.

Bottom line: you will get what you are looking for on the web, but you have to go the extra mile of figuring out the apt and reliable result and reading the whole document to get what you want.

Web 3.0, or the Semantic Web

The following are the intrinsic points about Web 3.0:

  1. It is a web of data.
  2. It reuses data.
  3. It enables data integration and data interchange.
  4. It lets us access granular data as data (not documents).
  5. Defining the relationship between two resources (websites) is one of its key concepts.

Unlike Web 2.0, the Semantic Web is machine readable and has formalism technologies like RDF Schema, OWL, and RIF to help machines make decisions.
What Is the Semantic Web, Technically?

If you are new to the concepts of semantic web, it is important to have a look at the basic building blocks of semantic web.

So, I start with some common terms related to the technology to ease into it.

I have tried to keep the technical details as simple as possible; if you have any issues or doubts, please write to me in the comments section.

Some Common Terms Used in Semantic Web Technology

Documents: Most of the information on Web 2.0 is published in human-understandable HTML documents. They are human readable, but machines cannot understand them.

Hyperlinks: On these web documents, we often see links like this. By clicking one, we land on another web resource, be it a website, web page, post, image, video, URI to some resource, or an email.

Basic Building Blocks Of Semantic Web.

Resources: Anything on the web that can be identified as an entity, named, addressed, or handled is a resource. For instance: a website, image, text, file, or email.

Resources are identified on the web with an ID called a URI.

URI: A URI identifies a resource with a unique string of characters. Every entity on the web can have a URI.

Metadata: Metadata is data about data: data that defines what a document is about.

Data: In the context of the Semantic Web, the meaning of data is quite the opposite of documents.

Granular pieces of information in a structured format, with data types defining what kind of information they are, are called data.

Unstructured Data: Basically unorganised information. It has to be read to be understood, and only humans can understand it. There is no data model representing the information.

Example: Text file or image file on the web.

Structured Data: Well-organised information. Both humans and machines are able to read the data. The representation uses a data model.

Example: A relational database with tables for dates, social security numbers, and addresses. (However, a relational database cannot serve as structured data on the Semantic Web, because there is no common schema that tells the web what the field names mean.)

Data model: A way in which information is represented. On the Semantic Web, RDF is the data model.

Semantic Web Layers - How the Semantic Web enables machines to read and understand data

Technically, the main goal of the Semantic Web is the creation of collective intelligence on the web in an automated way (not one website providing the information for you; we should be able to federate data).

In short, a piece of software should be able to use other software's data easily on the web.

To make the data on the web machine readable, the web needs knowledge representation in a unified way.

This requires:

  1. A unified way of representing data (technically, this requires knowledge representation)
  2. Easy information exchange across application boundaries, with a Semantic Web database that is adaptive and dynamic in nature (the Semantic Web data format)
  3. Named relations, so machines can understand the semantics (adding semantics)
  4. People creating open data in Semantic Web formats, as linked data for open knowledge (the semantic database)
  5. Automating and collecting knowledge from semantically collected data (inference rules to automate the reasoning)

Knowledge Representation:

It is an AI field concerned with representing data in a way that computers can use.

What does that mean?

Let's consider a normal conversation between two individuals on the phone.

Usually, the conversations are in sentences.

There is a language of communication which has words, grammar, punctuation and slang.

Words and punctuation are two important kinds of symbols that communicate the real message.

But often, in real-world situations, the meaning of words and sentences changes altogether with the context of the situation.

That is, more symbols or gestures are used to communicate the message.

For the Semantic Web to really understand what we mean, and to create agents that can automate tasks for us, machines need to understand these symbols, which are called semantics in computing and linguistics terminology.

For machines to have such conversations and understand the information, we need:

  1. A syntax that all computers or machines can understand
  2. The knowledge of the subject under discussion, available to the machines in an understandable format

Knowledge representation is the creation of such unified, structured data about a domain, so that machines can understand the information.

There are readily available knowledge bases in different domains.

Tim Berners-Lee's idea for the Semantic Web was to import such existing knowledge representations to the web in a Semantic Web format. To make such different knowledge databases collaborate, the Semantic Web needs a language:

  1. to express data and rules for reasoning about data, and
  2. to export the rules of any existing knowledge representation to the web.

RDF (whether written in XML or Turtle serialisation) is one such data model, which can serve as the fundamental building block for such a language, with schema (RDF Schema) and reasoning (OWL) capabilities. More on RDF, RDF Schema, ontologies, and OWL in the following sections.

For machines to automate reasoning with such a knowledge representation, the possibility of ambiguity and uncertainty in decision making should be reduced. Knowledge representations use designed formalisms for that.

Semantic nets, ontologies, and frames are such formalisms, readily available from the AI community.

The Semantic Web mostly uses ontologies (vocabularies) as the formalism for reasoning.

In addition, they also need rules from the domains to further refine the accuracy of the automation.

To explain how semantic databases are developed and how existing knowledge bases can express their data and export their rules to the web, let's look at the layer cake, the proposed architecture of the Semantic Web.

Semantic Web Layer Cake

Note: We are going to discuss the important sections of these layers.

Unicode: Technical encoding of text in any language for the computers.

The characters of all recognized languages are mapped to unique numbers for computers to understand. Unicode is the universal standard for such a mapping of characters to numbers.

Using one encoding for all applications removes the need for conversions between character encodings.

URI: Uniform Resource Identifiers are the basic elements that support the reuse of data on the Semantic Web. A URI is a generalized form of a URL, which helps RDF bring in the principle of universality. It is also a way to identify a resource on the web.

XML: Just as HTML was designed for unstructured data, XML was developed to represent structured data that other applications can use by parsing it. Data is organised in a tree format, with hierarchies.

  1. XML lets everyone create their own tags.
  2. You can create tags (hidden labels) such as <collegeName> or <groupName> to annotate web pages or sections of a page.
  3. By doing so, the content of these tags can be used by scripts (computer programs for automation; web scrapers and the Google crawler are, at their core, scripts).

But an open-source script writer cannot make use of some website's XML tags unless they know what the tags mean.

Bottom line: XML adds structure (syntax and grammar) to web documents, but it does not explain the meaning of the structure.
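A quick sketch of this point: a program can parse custom XML tags, but nothing in XML itself tells it what those tags mean. The tag names below are made up for illustration.

```python
# XML gives the document a structure a program can parse, but the tag
# names carry no agreed-upon meaning. All tag names here are invented.
import xml.etree.ElementTree as ET

doc = "<student><collegeName>LUT</collegeName><groupName>CS-101</groupName></student>"
root = ET.fromstring(doc)

# A script can extract values by tag name...
print(root.find("collegeName").text)  # LUT

# ...but nothing tells it that <collegeName> means the same thing as
# another site's <universityName>; that knowledge lives outside XML.
```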

So, to express meaning, the Semantic Web uses RDF.

RDF: RDF at its core is a simple data model: a general-purpose data model for data interchange, and a metadata language for unifying data from different sources.

The web practically deals with different types of data, so it needs a data model able to represent all of them. RDF is such a model: simple, general purpose, and with the least power.

It helps describe web resources (data about data on websites, pages, people, things, places, and institutions) in a graph format, where the nodes and edges of the graph are labeled with URIs.

Nodes can be resource nodes or value nodes, and a URI represents every node.

Resource nodes are connected to value nodes or other resource nodes using edges (also labeled with URIs) that specify the relation between the nodes.

Unlike relational databases or XML, RDF is a pretty simple model that basically allows anything on the web to connect with anything else. Any term or word can be connected with other terms (RDF promises decentralization). The link between terms carries a relationship value.

Precisely, in RDF these statements are triples: a subject and an object related by a predicate.
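Here is a minimal sketch of the triple idea: every statement is a (subject, predicate, object) tuple, each part ideally identified by a URI. The example.org URIs below are invented.

```python
# One RDF statement as a triple. The URIs are invented for illustration:
# "John works at the LUT library."
EX = "http://example.org/"

triple = (EX + "John", EX + "worksAt", EX + "LUT_Library")
subject, predicate, obj = triple

# The predicate names the relationship between subject and object.
print(predicate)  # http://example.org/worksAt
```

A graph is then simply a set of such triples, with nodes shared between statements.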

RDF is Simple:

Thanks to this simplicity, data from other data models can be represented in the RDF graph model easily.

For instance, a simple RDBMS can be expressed in RDF:

  • Relational database column → RDF property (relation URI)
  • Relational database row → RDF node (URI)
  • Relational database cell → RDF value (URI or plain value)

At its core, it is a general-purpose data model on which an expressive language, letting knowledge representations collaborate with each other, can be developed. That means the data stored, sent, or queried (using SPARQL) on the Semantic Web is RDF data.
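The table-to-RDF mapping above can be sketched in a few lines: each row becomes a node URI, each column a property URI, and each cell a value. The table contents and the example.org namespace are invented.

```python
# Sketch of mapping a relational table to RDF triples.
# Rows, columns, and the namespace are invented for illustration.
EX = "http://example.org/"

columns = ["name", "role"]
rows = [("emp1", ["John", "Librarian"]),
        ("emp2", ["Anna", "Professor"])]

triples = []
for row_id, values in rows:
    for col, val in zip(columns, values):
        # row -> node URI, column -> property URI, cell -> value
        triples.append((EX + row_id, EX + col, val))

print(len(triples))  # 4 triples: one per table cell
```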

The layers of the Semantic Web, explained in later sections, show how this simple language of triples is made expressive.

In what language is RDF written?

Traditionally it was written in XML, but in recent times more serialisation languages, like Turtle, have come up.

RDF Schema:

RDF technically represents information with conjunction.

As RDF cannot express disjunction and negation, complex inferences cannot be made with RDF alone.

Inferencing and reasoning, over a given database and over combined databases, is one of the key propositions of the Semantic Web. That is how it plans to enable machines to create more knowledge.

Moreover, RDF on its own is very permissive.

For instance: if John works at a library at LUT University, then John is an employee of LUT.

Such statements are possible with RDF (although its logic is very basic), but repeatable results are not guaranteed with this kind of expression.

Without the restrictions of RDF Schema, the results might come up once as "John is an employee at LUT" and another time as just "John is a librarian at LUT".

When you search for John's role at LUT, the result should be accurate and consistent: “John is employed as a librarian at LUT.”

Machines need that consistency in the results.

RDF Schema helps RDF applications use a limited section, part, or category of the language (a class) by defining the properties that can be used.

Moreover, the simple and open nature of RDF allows any triple to be created: agents or machines cannot know whether an author is a machine, a thing, or a person. RDF Schema, when imported into the model and merged with the RDF triples, adds some inferencing and semantics to basic RDF.

RDFS thus adds some semantics to RDF by defining classes, subclasses, properties, and subproperties. RDFS enriches the description of existing triple statements.
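A tiny sketch of what RDFS-style inference adds: a subClassOf declaration lets a machine derive new type statements from existing ones. The class and property names below are invented, and this hand-rolled rule is only a toy stand-in for a real RDFS reasoner.

```python
# Toy RDFS-style inference: if Librarian is a subclass of Employee,
# then anything typed Librarian is also an Employee.
# Class and property names are invented for this sketch.
schema = {
    "subClassOf": {"Librarian": "Employee"},
}

data = [("John", "rdf:type", "Librarian")]

inferred = set(data)
for s, p, o in data:
    if p == "rdf:type" and o in schema["subClassOf"]:
        inferred.add((s, "rdf:type", schema["subClassOf"][o]))

# The new triple was never stated explicitly; it was inferred.
print(("John", "rdf:type", "Employee") in inferred)  # True
```

This is the consistency point from above: with the schema in place, John is always both a librarian and an employee, never just one or the other.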

All this is good, but although a schema can serve as a small ontology for machines to use when inferencing over RDF data, higher-level semantics cannot be defined with a schema alone. For instance, the cardinality of properties or relationships, and the mapping of similar properties, cannot be defined with a schema. (A detailed example below.)

The Web Ontology Language (OWL), an extension of RDF and the next layer in the Semantic Web cake, helps in that respect. It supports designing ontologies with constraints such as cardinality and datatyping.


OWL was started to help software act autonomously, without humans updating it as in other data formats and models like relational databases and UML. It is the next important thing after RDF in Semantic Web technologies, as it helps define unambiguous, complex, and interdependent data models that rely on mathematical logic.

What does it do?

OWL is a declarative language used to express ontologies.

OWL, usually together with description logics (DL), helps define the terminology and describe the classes, subclasses, and relationships of a particular domain, and thus create ontologies.

Basically, it adds further semantics and reasoning over the triple statements inferred from the schema.

What is the need for Ontology?

Let’s look at a practical real-world example:

Although we are talking about RDF model, the practical world is full of relational databases.

RDF thrives on using URIs to represent information instead of words (a process of removing ambiguity and assuring certainty).

For example, let's consider collecting the addresses of working people in a particular zip code, using the databases of companies in the city.

When a human does the filtering and collecting, we can understand the meaning of the database field names (even if they are named with synonyms).

But machines cannot identify which field in each database is the zip code. Some databases have it as an address code, some as a pin code, and some as an area code.

A software agent designed to combine databases and filter results from the combination needs an easy way to understand which field in each database means zip code.

To help in such situations, ontologies, the other important basic component of the Semantic Web, are used. The ontology of a domain defines such similarities and relates the terms accordingly.
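The zip-code scenario can be sketched as follows: a small "ontology" declaring which local field names mean the same thing lets an agent merge databases that use different names. All field names and records below are invented.

```python
# Sketch of the zip-code scenario: an ontology-like mapping says that
# "zip code", "pin code", and "area code" all mean the same thing.
# Field names and records are invented for illustration.
same_as = {"zip code": "zip", "pin code": "zip", "area code": "zip"}

db_a = [{"name": "John", "zip code": "53850"}]
db_b = [{"name": "Anna", "pin code": "53850"}]

def normalise(record):
    """Rename each field to its canonical term, if the ontology knows one."""
    return {same_as.get(k, k): v for k, v in record.items()}

merged = [normalise(r) for r in db_a + db_b]
in_area = [r["name"] for r in merged if r["zip"] == "53850"]
print(in_area)  # ['John', 'Anna']
```

Real ontologies express such equivalences with constructs like `owl:equivalentProperty` rather than a hand-written dictionary, but the effect for the agent is the same.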

Ontology, in philosophy, is the study of existence: the details of what types of things exist, and theories about them.

Borrowing the term, the web and related fields named a collection of terms, and the relations between those terms, an ontology.

Practically, in web terms, an ontology is a document or file that formally defines the relations between terms, using taxonomies and a set of inference rules.

Taxonomies: Ontologies have classes and subclasses of objects (entities), and relationships among those classes and subclasses.

For example, we ourselves use categories and subcategories on our blogs. The categorisation, and the relations among categories, help manage our content and give some meaning to the sitemap of the website.

Let's assume the web is one giant website (a directory of all websites).

Such basic classes and subclasses of web entities would become a very powerful tool (in terms of management and traversal).

Inference rules in Ontologies

Besides categorisation, the other thing that empowers ontologies is inference rules. They formally define which category is a subset of another, and which properties of the parent category are inherited by a child category by default.

For example, here is what inference rules in an ontology can do.

By intuition, we know a neighbourhood belongs to a state, and a state belongs to a country. Ontologies define such things for machines.

Software agents can use such ontologies to readily make suggestions (with more certainty, too, than the information we get from search engines), for instance the date format associated with a particular neighbourhood.
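The containment rule above can be sketched as a transitive "located in" relation that an agent follows to infer facts never stated directly. The place names are invented.

```python
# Sketch of a transitive "located in" inference rule.
# Place names are invented for illustration.
located_in = {"Oakwood": "Ohio", "Ohio": "USA"}

def containers(place):
    """Follow located_in links transitively to find every container."""
    result = []
    while place in located_in:
        place = located_in[place]
        result.append(place)
    return result

# Nobody stated that Oakwood is in the USA; the rule derives it.
print(containers("Oakwood"))  # ['Ohio', 'USA']
```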

Besides adding better semantics, ontologies provide more advantages.

Accuracy as a result

Accuracy is one of the advantages of the Semantic Web we have been discussing in this post.

The ontology part of the Semantic Web cake helps provide that accuracy by removing ambiguity.

For instance, a web page with a link to a relevant ontology for its niche is already telling the web's software agents that the page is about a certain topic.

For the growth of knowledge bases: when websites link their pages to ontologies and define their entities with ontology rules, the knowledge on the web page can be attributed to the knowledge base associated with the ontology used.

RIF can be used for ontologies with relations and concepts so complex that OWL is not an efficient way to handle them.

RIF: After OWL, rules are the next important thing.

Different applications on the web use their own rule systems.

In the same way, ontologies of different domains, with different rules, will be used together on the Semantic Web.

For instance, there are ontologies describing the auto parts of different car companies. One of those companies might use measurements in inches and another in centimetres.

For the Semantic Web to make sense of both while inferencing about the auto industry, it needs a mechanism to translate the rules of one ontology into those of the other. RIF, the Rule Interchange Format, serves this purpose, as the name suggests.

For instance, there are rule systems like FLORA and Euler for decision making in different health-care systems. When those two systems have to integrate, systems like RIF are essential for data interoperability.
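The auto-parts scenario above can be sketched with a single interchange rule: one catalogue publishes inches, another centimetres, and a conversion rule lets them be merged. Part names and sizes are invented; a real RIF document would state such a rule declaratively rather than as code.

```python
# Sketch of rule interchange: two catalogues use different units,
# and one conversion rule lets their data be merged.
# Part names and sizes are invented for illustration.
parts_in_inches = {"bolt-a": 2.0}   # company A publishes inches
parts_in_cm = {"bolt-b": 5.08}      # company B publishes centimetres

def to_cm(inches):
    """The interchange rule: inches -> centimetres."""
    return inches * 2.54

# Merge both catalogues into one consistent unit system.
merged_cm = {name: to_cm(v) for name, v in parts_in_inches.items()}
merged_cm.update(parts_in_cm)
print(sorted(merged_cm))  # ['bolt-a', 'bolt-b']
```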

One of the goals of the Semantic Web is “reasoning over the information on the web.”

RIF aims at such reasoning by enabling seamless rule interchange between the rule languages of the Semantic Web.

Rules on the Semantic Web generally exist to enhance the expressiveness of ontologies.

SPARQL: Just as SQL is used to query relational databases, SPARQL is used to query RDF data.
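SPARQL works by matching triple patterns against an RDF graph. The sketch below hand-implements one basic pattern in Python to show the idea; in practice you would send the SPARQL text (shown in the comment) to a real SPARQL endpoint. The data is invented.

```python
# A SPARQL query is a triple pattern; matching it against a graph
# yields the variable bindings. Data is invented for illustration.
#
# The equivalent SPARQL text would be roughly:
#   SELECT ?person WHERE { ?person <worksAt> <LUT> . }
triples = [
    ("John", "worksAt", "LUT"),
    ("Anna", "worksAt", "Aalto"),
]

def select(predicate, obj):
    """Return every subject matching the pattern (?s, predicate, obj)."""
    return [s for s, p, o in triples if p == predicate and o == obj]

print(select("worksAt", "LUT"))  # ['John']
```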

Logic Layer:

It is the layer that unifies the logic, using mathematical logic to reconcile the semantics of the other parts (RDF, RDFS, OWL, SPARQL, and RIF) into one consistent model.

It would be complicated for applications to follow the rules of all the semantic technologies while integrating with other applications. One unified logic makes it easier for developers to build compatible models.


Is the information provided by the Semantic Web right, after all?

A few layers of the Semantic Web layer cake provide new inferences and rules, which in turn lead to conclusions or further inferences. The Proof layer at the top lets the user find out which business rule or inference led to a conclusion.

All these technologies are developed with the idea of enabling machines to understand the context of web pages, to automate things associated with the web.

This can happen with intelligent software agents. Let's dive into the concept of agents.

The power of the Semantic Web lies in software agents:

The present value of the web on the internet is very high. There are a multitude of reasons for that, but here are a few that created this immense value:

People create websites.

People use links to point to other websites in an effort to create better value.

In the same way, the Semantic Web can become just as powerful when people create web agents that collect information from diverse sources, process it, and exchange data and results with each other.

What happens when such a web is implemented?

Tim's Example of an Implementation of the Semantic Web (understanding semantics with a real-life example)

Here is a crude example from Tim of what the future web would be (this was back in 2001). I think we are still not completely there, but we are gradually closing in on it.

Actors: Pete, Lucy

Objects: entertainment system, phone

  1. Pete is listening to a song on an entertainment system connected to the web.
  2. Pete also has a phone connected to the internet, and he receives a call while listening.
  3. When the phone is picked up, it sends a message to the music system to reduce its volume.

Seems quite possible, and it is a simple, straightforward setting.

Let's look at a more complex context for the same example.

  1. The call Pete received was from his sister Lucy, calling from a hospital about their mom.
  2. The context of the call: their mom needs to see a physical therapist twice a week.

Actors: Pete, Lucy, the doctor, a web agent, a physical therapist

Actions the actors want to perform:

3) Lucy wants to book appointments with a therapist for the prescribed treatment, with the help of her assistant (a Semantic Web agent).

4) Pete wants to drive their mom to those appointments.

Web Agents in a Semantic World, Analysing and Performing Tasks

1) The web agent looks up the list of physical therapists in the city for the treatment prescribed by the doctor.

2) Not all therapists are covered under mom's insurance plan.

3) Not all therapists are close to mom's address.

4) Not all therapists have open appointments matching Lucy's and Pete's busy schedules (remember, Pete promised to drive mom to the therapist).

5) Not all therapists are trusted (ratings can help Lucy sort by the best).

Combining all the constraints above, the Semantic Web agent has to come up with a solution satisfying all the conditions.

The result is a plan: a list of the best physical therapists for the prescribed treatment, with good ratings from real users and appointments that fit Lucy's and Pete's schedules.

This is how the traditional activity of searching and analysing is converted into discovery, once machines can understand the semantics.

However, the best result was not the perfect result for Pete.

He had to add more preferences, considering the traffic problems on the way back from mom's therapy sessions.

Pete’s Preferences:

  1. The return trip from the therapist's location falls in rush hour.
  2. The first part of the search-and-sort activity was already done by Lucy's agent.
  3. All Pete needed to do was add more preferences to Lucy's result.
  4. This is possible with access certificates.
  5. The two agents can trust each other, given that they hold valid certificates for each other.
  6. The new result from Pete's agent comes out almost instantly, with a better location and better timings to escape the rush-hour traffic.
  7. But the agent asks Pete to reschedule a few other appointments, treating the present search as the priority (it has access to Pete's calendar data).

Pete's agent has access to his calendar and other appointments:

1) Pete's web agent is smart enough to keep track of all his appointments and even sort them by priority.

Why are a few words in the text above italicized?

It is probably easy for web agents to keep track of subjects and objects like doctors, therapists, Lucy, and Pete.

What they cannot understand with present web technology is the meaning of the italicised words in the paragraphs above.

When agents can find the meaning and context of those verbs on the web, they can perform the automated task described (no artificial intelligence required at all).

The Semantic Web is about defining the meaning of those verbs and making those definitions available.


A Few Interesting Observations from Tim's Example

  1. Data Integration

There are different types of data, from different applications, on different platforms. For instance: appointments data, calendar data, the therapists' timing data on the web, real-time traffic data and average traffic assumptions, and the ratings data on websites.

All this data has to be integrated into one system for the Semantic Web agents to sort and figure out the best results for all conditions.

  2. Web agents understanding the meaning

Doctors and Therapists are two different actors in the example.

The doctor prescribed a treatment that a therapist can understand (a real-world situation).

This whole meaningful sorting, and finding the best therapist, was possible because the machines, or agents in this case, were able to understand the semantics of the prescribed treatment and of the list of services provided by each therapist. Once a machine can understand the semantics, matching the treatment name against the therapists' treatment lists is very easy (irrespective of what synonyms or alternate wordings are used).

  3. All participating websites or applications optimised their sites for the Semantic Web

All the therapists' websites have ratings, and all those ratings are written (perhaps tagged in XML) in a way that machines can understand. (Now we have schema.org markup on most websites.)
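Today this is typically done with schema.org markup embedded as JSON-LD, which any program can parse with a standard JSON library. The clinic name and rating values below are invented; `LocalBusiness` and `AggregateRating` are real schema.org types.

```python
# Parsing schema.org rating markup (JSON-LD) with the standard library.
# The business name and rating values are invented for this sketch.
import json

markup = """{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Therapy Clinic",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.7",
    "reviewCount": "89"
  }
}"""

data = json.loads(markup)
rating = float(data["aggregateRating"]["ratingValue"])

# An agent can now compare ratings across sites numerically,
# instead of scraping prose reviews.
print(rating)  # 4.7
```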

  4. Web agents can communicate with each other: using safety certificates, the agents can access each other's data and sort each other's results.

This is an example of web of data where machines understand the meaning of the data.

Agents that go from website to website will not only scan the sites for keywords, but will also be able to understand, in a structured way, the crucial details of each website. In our case, the appointment times are understood by the agents.

This automated way in which machines do everything, from analysing a list of websites to sorting and providing you accurate, granular results, is the result of a functioning Semantic Web.

There are a few things on which the Semantic Web flourishes:

  • Adding semantics to the documents
  • Adding the data existing to the web
  • Figuring out a way for data interchanging and data interoperability for finding new relations between data and thus new answers.
  • Making out computers to analyse the documents to give use granular data as answers in stead of documents where we have to search for the answers.
  • Making use of structured data on the web and thus give machines a chance to understand the data, followed by reasoning and inferencing
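The last point, reasoning and inferencing over structured data, can be illustrated with a toy triple store. The facts and the tiny class hierarchy below are invented; the point is that a fact never stated explicitly can still be derived.

```python
# A toy triple store sketching "structured data + reasoning".
# The facts and the class hierarchy are invented for illustration.

triples = {
    ("Physiotherapist", "subClassOf", "Therapist"),
    ("Alice", "type", "Physiotherapist"),
    ("Alice", "offers", "physical_therapy"),
}

def infer_types(triples):
    """Simple inference rule: if x belongs to a subclass,
    x also belongs to the superclass. Repeat until nothing new appears."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        new = {(x, "type", sup)
               for (x, p, cls) in inferred if p == "type"
               for (sub, q, sup) in inferred
               if q == "subClassOf" and sub == cls}
        if not new <= inferred:
            inferred |= new
            changed = True
    return inferred

kb = infer_types(triples)
print(("Alice", "type", "Therapist") in kb)  # → True, though never stated
```

Real systems use RDF Schema or OWL for this kind of entailment, but the mechanism, deriving new triples from existing ones, is the same idea.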

Most Frequently Asked Questions

How is the Semantic Web not AI?

The Semantic Web is not artificial-intelligence magic. It is the process of solving a well-defined problem using well-defined operations on well-defined data.

Machines are not understanding the language on their own. In the Semantic Web, unlike in AI, humans annotate the content to help machines understand it.


What problems are hindering people from moving towards the Semantic Web, or from adding semantics to their text or information?

One of the biggest hindrances is that the web is used to publishing visually beautiful unstructured data (in fact, visually beautiful documents for people to understand easily).

There is no global standard for publishing content, data, or information on the web.

Every website follows its own unstructured way.

For instance, there are hotel websites that publish their details and information in their own way: timings, availability, ratings, pricing.

For these hotel websites to sync up well with flight, train, or bus websites, there should be a data standard that all of them follow, so that more possible options can be figured out. (On our own, we might not be able to work out all the possibilities for getting a better deal in terms of money, time, location, and other factors.) But if machines can understand this data, from all possible hotels in a city, in a prescribed standard, then they can give you more options, combining everything while considering factors like price, time, rating, and location.

With the current web, there are engines that are biased towards one website or another, and the other way of trusting a website is to read its reviews on the internet.

If there were a mechanism for representing positive and negative reviews on public forums in a standard way, that data would be available on the internet for machines to analyse and to create new inferences for every query you ask, with minimal effort on your part in analysing tons of websites and reading tons of reviews on multiple forums.

Machines could then directly get you the best possible solution, using your parameters as filters (you can choose the order of priority for those parameters).

That is the web making my work less and the results more accurate.
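The "order of priority for your parameters" idea above amounts to a lexicographic sort. A minimal sketch, with made-up hotels, prices, and ratings:

```python
# Sketch of filtering and ranking options by user-chosen priority order.
# Hotels, prices, ratings, and distances are made up for illustration.

hotels = [
    {"name": "H1", "price": 80, "rating": 4.1, "distance_km": 2.0},
    {"name": "H2", "price": 95, "rating": 4.8, "distance_km": 0.5},
    {"name": "H3", "price": 80, "rating": 4.6, "distance_km": 1.0},
]

def rank(options, priorities):
    """Sort options lexicographically by the user's priority order.
    'rating' is negated so that higher ratings come first."""
    def key(o):
        return tuple(-o[p] if p == "rating" else o[p] for p in priorities)
    return sorted(options, key=key)

# Price matters most to this user, then rating, then distance.
best = rank(hotels, ["price", "rating", "distance_km"])
print([h["name"] for h in best])  # → ['H3', 'H1', 'H2']
```

The hard part on the real web is not the sorting but getting every site to publish price, rating, and location in one machine-readable standard in the first place.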


I keep hearing that information is free in the Semantic Web. What does that mean?

Sharing data on the web using Semantic Web data formats allows your database to be linked to other databases, so that other software or analysis communities can make new inferences from the data.

For instance, a company doing research on gene mutation can share its data on the internet in linked data standards or Semantic Web data formats, so that other companies can leverage that data in their own research.
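The simplest RDF serialisation for sharing such facts is N-Triples: one subject–predicate–object statement per line, each term a URI. A minimal sketch (the example.org URIs and property name are invented; the BRCA1–breast cancer association itself is a well-known one):

```python
# Sketch: serialising facts as N-Triples, the simplest RDF serialisation,
# so other databases can link to them. The example.org URIs are invented.

facts = [
    ("http://example.org/gene/BRCA1",
     "http://example.org/prop/associatedWith",
     "http://example.org/disease/BreastCancer"),
]

def to_ntriples(facts):
    """Render (subject, predicate, object) URI triples as N-Triples lines."""
    return "\n".join(f"<{s}> <{p}> <{o}> ." for s, p, o in facts)

print(to_ntriples(facts))
```

Because every term is a URI, anyone else's dataset can point at exactly the same gene and disease identifiers, which is what makes the linking possible.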


This is like helping humanity as a whole – one of Tim's dreams for the web.


What is federated data?

Federated data: data from different sources, in different locations, can be queried together.
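A sketch of the idea: the same question is put to two independent sources and the answers are merged into one result. The sources and their data are invented; in practice this is what SPARQL's federated-query facility does across live endpoints.

```python
# Sketch of a federated query: one question asked of two independent
# "sources", answers merged. The sources and data are invented.

source_a = {("Alice", "speciality"): "physiotherapy"}
source_b = {("Alice", "rating"): 4.7}

def federated_query(subject, sources):
    """Collect every (property, value) known about a subject across sources."""
    result = {}
    for source in sources:
        for (subj, prop), value in source.items():
            if subj == subject:
                result[prop] = value
    return result

print(federated_query("Alice", [source_a, source_b]))
# → {'speciality': 'physiotherapy', 'rating': 4.7}
```

Neither source alone knows both facts; only the federated view does.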


How far has the Semantic Web come?

It definitely did not see the kind of success that Tim Berners-Lee and James Hendler expected, but it has certainly come a long way.

1) There are many niche ontologies and knowledge representations.

2) All the big companies in the search engine business have been implementing a Semantic Web layer in their engines.

3) Schema.org was a great victory, though there is still a long way to go: only around 20% of websites use schema.org.

4) Google has already started using RDF triples in its Knowledge Graph for all entity-related queries.
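That triple-based storage is what makes answers like the Harry Kane query from the beginning of this article possible: the engine looks up a fact, not a document. A toy sketch (the predicate name is invented; the figure of 6 goals is the widely reported count for Harry Kane at the 2018 World Cup):

```python
# Sketch of an entity query answered from stored triples, in the spirit of
# a knowledge graph. The predicate names here are invented for illustration.

graph = {
    ("Harry Kane", "goals_scored_2018_world_cup"): 6,
    ("Harry Kane", "plays_for"): "England",
}

def answer(entity, predicate):
    """Return the stored value for an (entity, predicate) pair, if any."""
    return graph.get((entity, predicate))

print(answer("Harry Kane", "goals_scored_2018_world_cup"))
# → 6: a granular answer, not a list of documents to read
```

"Things not strings" in miniature: the key is the entity, and the answer is a value, not a page.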

However, very few sites have taken up schema or understood the purpose of semantics.

This slow uptake of the Semantic Web by users and communities might be due to a couple of reasons:

  • The complexity of understanding the Semantic Web
  • Not enough tools for non-technical people to implement Semantic Web technologies


What does the Semantic Web mean for SEO?

Google only considers schema for now, so implementing it already helps you.

Additionally, if you run a business, mark yourself up as an entity.
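Marking a business up as an entity usually means embedding schema.org JSON-LD in the page. A minimal sketch built with Python's `json` module: the business details below are invented, while `LocalBusiness`, `AggregateRating`, and the properties used are genuine schema.org vocabulary.

```python
import json

# A minimal schema.org JSON-LD sketch for marking a business up as an entity.
# The business details are invented; the types and properties are schema.org's.
markup = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Therapy Clinic",
    "url": "https://example.com",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.7",
        "reviewCount": "132",
    },
    "openingHours": "Mo-Fr 09:00-17:00",
}

# This JSON would be placed inside a <script type="application/ld+json">
# tag in the page's HTML for crawlers to read.
print(json.dumps(markup, indent=2))
```

This is exactly the "ratings written in a way machines can understand" idea from the therapist example earlier, applied to your own site.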


What is the Semantic Web in simple terms?

It is a way of describing facts (knowledge) in a computer language, which enables programmers or Semantic Web professionals to connect facts and ideas stored in different places (databases).

This process can help you find information with more accuracy and at a more granular level.

You might be wondering whether search engines are not doing that already.

They are, but they help you find documents that you then have to read and understand.

The Semantic Web, by contrast, will provide you with the ideas and concepts directly from the data.
