If you've ever been to an expensive restaurant and ordered a familiar dish like, say, lasagna, but received a plate with five different elements arranged in a way that does not at all resemble what you know as lasagna, then you have probably tasted deconstructionism.
This approach to cuisine aims to challenge the way our brain makes associations, to break existing patterns of interpretation and, in so doing, to release unrealized potential. If the different elements work together harmoniously, it should be the best lasagna you've ever tasted.
So it is with 5G.
In principle, the fifth-generation (5G) network is deconstructed. Firstly,...
Because it demands so much manpower, cybersecurity has already benefited from AI and automation to improve threat prevention, detection and response. Spam filtering and malware identification are common examples. However, AI is also being used – and will be used more and more – by cybercriminals to circumvent cyberdefenses and bypass security algorithms. AI-driven cyberattacks have the potential to be faster, more widespread and less costly to mount. They can be scaled up in ways that have not been possible in even the most well-coordinated hacking campaigns, and they can evolve in real time, achieving high impact rates.
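To make the spam-filtering example concrete, here is a minimal naive Bayes classifier of the kind that underpins many such filters. This is a sketch only – the training messages are invented toy data, and production filters use far larger corpora and richer features:

```python
from collections import Counter
import math

# Toy training data: (message, is_spam). Purely illustrative.
TRAIN = [
    ("win a free prize now", True),
    ("claim your free money", True),
    ("lowest price guaranteed win big", True),
    ("meeting moved to three pm", False),
    ("lunch with the project team", False),
    ("notes from the planning meeting", False),
]

def train(examples):
    """Count word frequencies per class for a naive Bayes model."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, label in examples:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def spam_score(text, counts, totals):
    """Log-odds that `text` is spam, with add-one smoothing."""
    vocab = len(set(counts[True]) | set(counts[False]))
    score = 0.0
    for word in text.split():
        p_spam = (counts[True][word] + 1) / (totals[True] + vocab)
        p_ham = (counts[False][word] + 1) / (totals[False] + vocab)
        score += math.log(p_spam / p_ham)
    return score

counts, totals = train(TRAIN)
print(spam_score("free prize money", counts, totals))    # > 0: leans spam
print(spam_score("project meeting notes", counts, totals))  # < 0: leans ham
```

A positive score means the words are more probable under the spam model than the legitimate one; real deployments add many more signals, but the statistical core is the same.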
In 2013, George F. Young and colleagues completed a fascinating study into the science behind starling murmurations. These breathtaking displays of thousands – sometimes hundreds of thousands – of birds in a single flock swooping and diving around each other look from a distance like a single organism organically shape-shifting before the viewer’s eyes.
In their research article, Young et al. reference the starling’s remarkable ability to “maintain cohesion as a group in highly uncertain environments and with limited, noisy information.” The team discovered that the birds’ secret lay in paying attention to a fixed number of their neighbors –...
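The neighbor-tracking rule the researchers describe can be sketched in a few lines: each bird updates its heading toward the average heading of a fixed number of nearest flockmates. This is a simplified illustration, not the paper’s model; the neighbor count `k` is left as a free parameter:

```python
import math

def k_nearest(i, positions, k):
    """Indices of the k birds closest to bird i (a fixed count,
    regardless of how far away those neighbors are)."""
    order = sorted(
        (j for j in range(len(positions)) if j != i),
        key=lambda j: math.dist(positions[i], positions[j]),
    )
    return order[:k]

def align_headings(positions, headings, k):
    """One update step: each bird adopts the mean heading (radians)
    of its k nearest neighbors, averaged as unit vectors."""
    new = []
    for i in range(len(positions)):
        nbrs = k_nearest(i, positions, k)
        vx = sum(math.cos(headings[j]) for j in nbrs)
        vy = sum(math.sin(headings[j]) for j in nbrs)
        new.append(math.atan2(vy, vx))
    return new

# Three birds in a row; the third is headed perpendicular to the others.
positions = [(0, 0), (1, 0), (2, 0)]
headings = [0.0, 0.0, math.pi / 2]
print(align_headings(positions, headings, k=2))
```

Because each bird reacts only to a handful of neighbors, local adjustments ripple outward and the whole flock stays cohesive without any central coordination – the property that makes murmurations a popular metaphor for resilient distributed systems.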
Recent events have confirmed that the cyber realm can be used to disrupt democracies as surely as it can destabilize dictatorships. Weaponization of information and its malicious dissemination through social media push citizens into polarized echo chambers and pull at the social fabric of a country. Present technologies, enhanced by current and upcoming Artificial Intelligence (AI) capabilities, could greatly exacerbate disinformation and other cyber threats to democracy.
Cybersecurity strategies need to change in order to address the new issues that Machine Learning (ML) and Artificial Intelligence (AI) bring into the equation. Although those issues have not yet reached crisis stage, signs are clear that they will need to be addressed – and soon – if cyberattackers are to be prevented from obtaining a decided advantage in the continuing arms race between hackers and those who keep organizations’ systems secure.
Attackers, often employing techniques like model querying, can gather valuable information about a target model’s structure, parameters and learned features, gaining insight into how to craft inputs that the model fails to classify correctly. This reconnaissance lets attackers meticulously modify malicious payloads or network traffic patterns so that they resemble benign inputs to the model, evading detection while retaining their damaging capabilities.
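The query-and-evade loop described above can be sketched against a toy detector. Everything here – the binary features, the weights, the threshold, even the idea that the detector returns a confidence score – is invented for illustration; real detectors and real attacks are far more complex:

```python
# Hypothetical black-box detector. The attacker never sees _WEIGHTS;
# it can only query score() and classify(). Features are binary
# indicators (e.g. "packed binary", "suspicious API call").
_WEIGHTS = [2.0, 1.5, 1.0, -1.0, -1.5]
_THRESHOLD = 1.5

def score(features):
    """Black-box query: the detector's confidence that this is malicious."""
    return sum(w * f for w, f in zip(_WEIGHTS, features))

def classify(features):
    """True if the sample is flagged as malicious."""
    return score(features) >= _THRESHOLD

def evade(features, mutable):
    """Greedy evasion: repeatedly flip whichever mutable feature lowers
    the detector's score most, until the sample is no longer flagged.
    `mutable` lists the features the attacker can change without
    breaking the payload's actual functionality."""
    sample = list(features)
    while classify(sample):
        best_i, best_s = None, score(sample)
        for i in mutable:
            trial = list(sample)
            trial[i] = 1 - trial[i]
            if score(trial) < best_s:
                best_i, best_s = i, score(trial)
        if best_i is None:
            return sample  # no single flip helps; attack stalls
        sample[best_i] = 1 - sample[best_i]
    return sample

original = [1, 1, 1, 0, 0]          # flagged: score 4.5
adv = evade(original, mutable=[2, 3, 4])
print(classify(original), classify(adv))  # True False
```

The point of the sketch is the asymmetry: the attacker needs only query access, not the model itself, and each query leaks a little of the decision boundary – which is why defenders increasingly rate-limit queries and train against adversarially perturbed inputs.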
Where AI, robots, IoT and the so-called Fourth Industrial Revolution are taking us, and how we should prepare for it, are some of the hottest topics being discussed today. Perhaps the most striking thing about these discussions is how different people’s conclusions are. Some picture a utopia where machines do all the work, where all people receive a universal basic income from the revenues machines generate and where, freed from the need to work for wages, all people devote their time to altruism, art and culture. Others picture a dystopia where a tiny elite class uses its control of AI to hoard all the world’s wealth and trap everyone else in inescapable poverty. Still others take a broad view that sees minimal disruption beyond adopting new workplace paradigms.
Growing reliance on AI is unlikely to result in any of these three most common views of how AI will affect our future. Each view is founded on assumptions that fail to consider all the realities of AI.
That leaves us, however, with an important question as we plan our company’s – and our own – future in an increasingly AI-enabled world: “What will that world look like?”
AI’s impact on the economy
The effect AI will have on the economy is massive. Such was the conclusion of a 2017 PwC report, Sizing the prize: What’s the real value of AI for your business...
Whether AI and the technologies it enables will reach their full potential depends on the workforce that will work alongside them. Yet the skills that workforce needs are in short supply. Rather than debating what to do about massive job losses from AI, discussion should focus on how best to prepare workers for the types of jobs they will need to fill.
A shifting job picture
A 2017 McKinsey report says that approximately half of all activities done by the current workforce could be automated. Its authors point out, however, that this does not point to...
If you’ve read the many predictions about the future of AI, you’ve likely found them to be wildly different. They range from AI spelling doom for humanity, to AI ushering in a Golden Age of peace, harmony and culture, to AI producing barely a blip on society’s path toward ever-greater technological achievement.
Those three views – dystopian, utopian and organic – present issues we need to consider as we move deeper toward an AI-integrated future. Yet they also contain exaggerations and false assumptions that we need to separate from reality.
The Dystopian View of the AI Future
Those with a dystopian view of emerging technologies...
Ask people on the street how much AI affects their lives today, and most would probably answer that it doesn’t affect them right now. Some might say that it’s pure science fiction. Others might say that it may affect our future but isn’t used in our world today. Some might correctly identify a few ways it’s used in modern technology, such as voice-powered personal assistants like Siri, Alexa and Cortana. But most would be surprised to find out how widely it is already woven into the fabric of daily life.
In the summer of 1956, a small gathering of researchers and scientists at Dartmouth College, a small yet prestigious Ivy League school in Hanover, New Hampshire, ignited a spark that would forever change the course of human history. This historic event, known as the Dartmouth Workshop, is widely regarded as the birthplace of artificial intelligence (AI) and marked the inception of a new field of study that has since started revolutionizing countless aspects of our lives.