Philosophy
Our information and our experiences are increasingly synthesized by machines. Gone are the days when "information is power." Today, when anyone can access information about almost anything, the problem isn't getting information; it is understanding it and, increasingly, validating it.
There is much discussion about when AI will surpass human intelligence and what the ramifications would be if it did. That discussion misses the point altogether. Knowledge compounds; it extends previous knowledge. Historically, "knowledge" meant information collectively held to be true, usually by some body of "experts." Today, the amount of information available is effectively infinite: not infinite in the sense that it is so vast it seems boundless, but in the sense that it is generated on demand. Search engines "looked up" information; AI synthesizes it. The biggest problem with AI today isn't that it doesn't "know" things; it is that it is hard to know how "true" the things it has made up are.
Most people don't intuitively understand that a digital representation of anything is only partially true. It may be entirely true in every aspect someone cares about, but that doesn't make it true in every aspect someone could care about. A vinyl record captures the entire waveform of a sound. Digital audio "samples" that waveform and relies on the human brain to fill in the gaps on playback. The same is true for movies and video. In general, those representations work well enough. The problem becomes clear when, for example, a race finish is so close that the video must be slowed down to see who actually reached the line first. What you want is more images for the same span of time, but they simply aren't there. Today, more and more, those missing images are estimated (made up) by a kind of AI. The estimation is complex and technically sophisticated, but it is not exactly true and may not even be close.
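A toy sketch makes the point concrete. The four-pixel "frames" and the simple linear blend below are assumptions chosen for illustration; production frame interpolation uses far more sophisticated, learned motion models, but the estimated frames are still made up rather than captured:

    # A minimal sketch of why "slow motion" frames are estimates, not recordings.
    # A frame here is just a list of pixel brightness values (an assumption for
    # the sketch); the interpolation is a plain linear blend.

    def interpolate_frame(frame_a, frame_b, t):
        """Estimate the frame at fraction t (0..1) between two captured frames."""
        return [a + t * (b - a) for a, b in zip(frame_a, frame_b)]

    # Two captured frames, one frame-interval apart: a bright spot moving right.
    frame_0 = [0.0, 1.0, 0.0, 0.0]   # spot at position 1
    frame_1 = [0.0, 0.0, 0.0, 1.0]   # spot at position 3

    # The "slow motion" frame halfway between them was never recorded.
    midpoint = interpolate_frame(frame_0, frame_1, 0.5)
    print(midpoint)  # [0.0, 0.5, 0.0, 0.5]

The blend is plausible, but it is not the truth: the real spot was at position 2, and no amount of mixing these two captured frames can recover it.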
ipoint labs believes that a growing share of mistakes happen because, when people find answers, they assume they understood the question in the first place. Communication relies on shortcuts and shared models of knowledge. It works well when the receiver's model is close to what the sender assumed. As the saying goes, "all models are wrong; some models are useful." Humans use various techniques to gauge what other humans know. Those techniques, however, are far less effective for understanding what a machine "knows." Digital knowledge requires new tools and techniques.
About us
ipoint labs was formed to bring innovation to what has become known as the Information, Communication and Entertainment (ICE) industries. Long before the advent of the search engine or Generative AI, we believed that technology should be used to augment human communication.