Everyone is always asking me how big our ontology is. How many nodes are in your ontology? How many edges do you have? Or the most common — how many terabytes of data do you have in your ontology?
We live in a world where over a decade of attempted human curation of a semantic web has borne very little fruit. It should be quite clear to everyone at this point that this is a job only machines can handle. Yet we are still asking the wrong questions and building the wrong datasets.
The exponential growth of data created on the web has naturally led to a desire to categorize that data. Facts, relationships, entities — that is how those of us who work in the semantic world refer to the structuring of data. It’s pretty simple, actually. Because we are humans, it happens so quickly in our subconscious…