“I know everything there is to know about technology.”
This statement is, of course, utter bunk.
In an age where new technologies can be assembled as easily as a Word document, no single person can plausibly understand enough to make this statement. The limitation is not intellect; it is time and complexity. Today (whatever day it is), tens of millions of developers around the world are working on open source, modular designs, customization, modification and new innovations.
The problem for information security professionals and others working within technology is the expectation, particularly among people who do not work directly in technology, that expertise in any slice of the tech sector means you know everything. After all, you are the tech expert, right?
It might be about blockchain, AI, smart devices, cloud, … “What can you tell me about this particular technology I am interested in? It’s called [insert name you never heard of here]. Have you heard of it?”
How do you answer these questions without losing the confidence of whoever asked them?
This is an article about how I do it (or at least, how I do it if I am being paid!).
How you choose to do this will probably vary based on your particular job role.
Technologies fall into broad categories:
- Cloud
- Mobile
- Edge
- Artificial Intelligence (and sometimes basic machine learning with an AI label!!)
- Blockchain
- “Traditional” (server/desktop/OS)
- Smart devices (also sometimes known as IoT or Internet of Things)
- …
I always aim to learn and maintain up-to-date knowledge of the fundamentals in each category. I also keep an eye out for new and emerging categories. For example, edge computing was not really a thing five years ago.
Sustaining this knowledge was not always easy. I used to set time aside to research each category personally (for example, so that I could put definitions into books on the topic). Things are easier now because of the resources available. For example, ISACA has resources and even a certification on the fundamentals of emerging technologies.
Understanding the fundamentals provides me with a good jumping-off point. That means that anytime I am asked what I know about technology X – I can quickly find out what broad technology category it fits into and am able to tell the person:
“I haven’t heard of technology X before but I can tell you the fundamentals about the categories of technologies it fits into – and if required I can research technology X to understand its particular features.”
As an example, let’s look at some of the basics of AI:
Artificial Intelligence
Things that are labeled as AI are not always as intelligent as they are made out to be.
AI has progressed a long way in a short time (more on that below). However, AI is also an umbrella term for a set of very different components – some of which are little more than a standard software program, some that are well on their way to emulating the human brain, and many shades in between.
Although all AI incorporates machine learning, very basic machine learning can refer to nothing more than a limited computer algorithm running on a limited dataset, trying to improve how the program handles that data or data similar to it.
In a nutshell – machine learning is a component in AI – but it is not (by itself) typically considered to be AI – at least not by the standards of today.
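To make the "basic machine learning" point concrete, here is a deliberately tiny sketch (the data and the security-flavored labels are made up for illustration): a nearest-neighbour classifier that "learns" only by memorizing a small dataset and comparing new inputs against it.

```python
# A deliberately tiny example of "basic machine learning":
# a 1-nearest-neighbour classifier that "learns" only by
# memorizing a small dataset. Illustrative sketch only --
# the data and labels below are invented.

def nearest_neighbour(train, query):
    """Return the label of the training point closest to the query."""
    best_label, best_dist = None, float("inf")
    for features, label in train:
        # Squared Euclidean distance between feature vectors.
        dist = sum((f - q) ** 2 for f, q in zip(features, query))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# "Training" data: (features, label) pairs -- e.g. login activity
# described as (failed_logins, countries_seen) with a risk label.
train = [((1, 1), "normal"), ((2, 1), "normal"),
         ((9, 4), "suspicious"), ((8, 5), "suspicious")]

print(nearest_neighbour(train, (2, 2)))   # near the "normal" cluster
print(nearest_neighbour(train, (7, 4)))   # near the "suspicious" cluster
```

That is the whole "model": no layers, no abstraction, just a limited algorithm generalizing slightly from a limited dataset – which is why, by itself, it would not usually be called AI today.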
You might have noticed that since around 2012, there has been a significant increase in how AI programs operate. For example, NLP (natural language processing) via software can now understand what is being said and respond to it with a level of accuracy that was impossible just a few years ago.
The primary reason is that major advances in the field moved AI away from a heavy reliance on programmers setting parameters, toward systems built on artificial neural networks (ANNs) with many more layers. This advance meant that cutting-edge AI could begin to make more programming decisions for itself, because some AI could now abstract and process information in ways much closer to how the human brain operates.
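The "layers" idea can be sketched in a few lines. This is a minimal, hand-built illustration of the structure only: the weights below are chosen by hand, whereas in a real deep network they would be set by training on data.

```python
import math

# A minimal sketch of the layered structure of an artificial neural
# network (ANN). The weights are hand-picked for illustration, not
# learned -- real networks set them through training.

def sigmoid(x):
    """Squash any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: each neuron weighs all inputs, adds a bias, squashes."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(inputs):
    # Hidden layer: two neurons, each reading both inputs.
    hidden = layer(inputs,
                   weights=[[2.0, -1.0], [-1.5, 2.5]],
                   biases=[0.0, 0.5])
    # Output layer: one neuron reading the hidden layer.
    out = layer(hidden, weights=[[1.0, 1.0]], biases=[-1.0])
    return out[0]

print(forward([1.0, 0.0]))  # a single score between 0 and 1
```

Each extra layer lets the network combine and re-combine signals from the layer before it – the stacking of many such layers is what gives modern "deep" networks their ability to abstract.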
Instead of the single term "AI," there are now several sub-types of AI – and also several different models on which it can work. For example, a limited memory AI has the ability to recall recent events so that it can learn from observation and experience. Conversely, a reactive AI has no ability to form or learn from such memories. Some AIs aim to emulate how the human brain works (theory of mind AI) …
Cutting-edge AI programs can develop such a sophisticated understanding of their particular field (for example medical research) that the owners of the technology benefit from the output and outcomes but lack the ability to understand the vast amount of analytical computation that was used by the AI to arrive at the result.
Most AI programs are (at present) nowhere near as sophisticated, and there are countless examples of AIs fed limited and biased data producing limited, biased and prejudiced results.
Overcoming Questions on Emerging Technology
How does the information above help me with emerging technologies?
When somebody tells me that they are considering an AI technology, it enables me to start to look at exactly what that means.
Is it really an AI?
And if so – what type of an AI is it and where does it get the data it learns from?
I may not be able to know about every emerging technology, but understanding the fundamentals of the really interesting technologies provides me with what I need to be able to rapidly investigate other emerging technologies of interest.