First, organisations need to determine the OEM provider’s approach to secure product management, from ideation to end of life. Determining this from the outset will help CIOs understand the core competencies of a product security officer, enabling them to cultivate the skills needed to productise security features, including product roadmap, planning, and lifecycle management. Second, a focus on an integrated digital security approach is needed, one that looks holistically across IT and data, product, and operations-related technology. Too many companies currently fail to see this convergence, leaving key features at risk of being hacked – easily. Third, companies must look at their supplier risk. Supplier risk assessment has traditionally focused on the data and IT infrastructure security of the supply chain, usually missing crucial elements such as product security, which must be factored in for stronger protection. More importantly, some supply chain leaders are still applying old vendor risk policies to OEMs that have become increasingly digital, compromising the security of new products and devices – and once again leaving the window ajar for hackers to jump in.
With concerns rising among consumers and citizens, and a growing need for ethics and trust, we need to set limits that ensure the sound and fair use of AI technologies. The new EU Artificial Intelligence Act is beneficial because it will dictate the rules and force companies to examine the societal implications of rapid technology adoption. We must find a balance between the benefits and risks of technology. With the emergence of AI-enabled applications, traditional surveillance is transforming into smart video, with new use cases that transcend what we consider surveillance today. Unfortunately, under the pretext of protection, camera operators risk exposing everyone within sight. In the name of the greater good, we tend to overlook what data is collected and whether it is secure. Any use of technology and any innovation must be transparent and explainable. In 2020, amidst the COVID-19 disruption, France launched its contact tracing application, but adoption was remarkably low because most citizens questioned the technology used and how the data was collected and stored. This forced the French government to rethink its approach and launch a new, “enriched” version of the application.
AI experts say emotion recognition systems are based on the assumption that humans manifest emotions in similar ways, yet something as simple as a raised eyebrow may have different meanings in different cultures. Luke Stark, assistant professor in the faculty of information and media studies at the University of Western Ontario, said in an interview, “Emotions are simultaneously made up of physiological, mental, psychological, cultural, and individually subjective phenomenological components. No single measurable element of an emotional response is ever going to tell you the whole story. Philosopher Jesse Prinz calls this ‘the problem of parts.’” In a recent essay for Nature, Professor Kate Crawford said many such algorithms are based on psychologist Paul Ekman’s study of nonverbal behaviour, conducted in the 1960s. According to Ekman, there are six basic emotions – happiness, sadness, fear, anger, surprise, and disgust. His work and ideas have formed the basis for emotion-detection technologies used by giants such as Microsoft, IBM, and Amazon.
The biggest danger to a nascent MDM program is starting with the wrong objectives, even though those objectives can often sound quite right. The best practice here is to start with discrete and measurable business outcomes. An acid test is whether you can describe the outcomes of MDM in nontechnical terms that the business can understand and champion, both before and after they are delivered. If you can’t do this, then you likely have the wrong objective! ... My experience has shown that the vast majority of enterprises stumble at this point, but it’s a great method to get IT teams to see the issue they will eventually face in maintaining momentum over the life of the MDM program. It is also helpful to divide business outcomes along two axes: those that make money vs. those that save money, and current but sub-optimal vs. net new business processes. While most IT teams are capable of solving the use cases in the lower-left quadrant on their own, true digital transformation resides in the upper-right quadrant, and requires full participation from the business in identifying, describing, and quantifying those outcomes.
The issue with all cloud-based technologies is that they inherently lack a perimeter. While you can have some protection, no single mechanism can determine which processes or people are supposed to have access and keep out those who lack access rights; you need a combination of protective measures. The other challenge is that manual processes can’t keep up with scaling, containers, and APIs. This is precisely why what is now called infrastructure as code has caught on: infrastructure is managed and provisioned through machine-readable definition files, which depend on an API-driven approach. This approach is integral to cloud-first environments because it makes it easy to change the infrastructure on the fly, but it also makes it easy to create misconfigurations that leave the environment open to vulnerabilities. Speaking of containers, it is also hard to track them across the numerous cloud offerings available. Amazon Web Services (AWS) alone has its Elastic Container Service and its serverless compute engine Fargate ...
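To make the misconfiguration point concrete, here is a minimal sketch of how machine-readable definitions both enable automation and let mistakes slip through. The definition format and resource names below are hypothetical (not any specific IaC tool’s schema); the check simply flags ingress rules open to the entire internet:

```python
# Minimal sketch: scanning hypothetical machine-readable infrastructure
# definitions for a classic misconfiguration -- an ingress rule that
# allows traffic from anywhere (0.0.0.0/0).

def find_open_ingress(definitions):
    """Return names of resources with an ingress rule open to 0.0.0.0/0."""
    risky = []
    for resource in definitions:
        for rule in resource.get("ingress", []):
            if rule.get("cidr") == "0.0.0.0/0":
                risky.append(resource["name"])
                break  # one open rule is enough to flag the resource
    return risky

# Hypothetical definition file contents, already parsed into Python dicts.
infra = [
    {"name": "web-sg", "ingress": [{"port": 443, "cidr": "0.0.0.0/0"}]},
    {"name": "db-sg", "ingress": [{"port": 5432, "cidr": "10.0.0.0/16"}]},
]

print(find_open_ingress(infra))  # ['web-sg']
```

Exposing a web front end to the world may be intentional, but the same one-line pattern applied to a database rule would silently open it to everyone, which is exactly the class of error automated policy checks on definition files are meant to catch.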
The cryptocurrency sector needs an international framework to regulate it, one that could be introduced to restrict its usage across all countries. At the moment, countries take a disjointed approach to regulating the sector – if they regulate it at all. Some countries, such as Japan, have passed regulations in favor of cryptocurrencies, recognizing them as legal property and placing the sector under the full supervision of the Financial Services Agency. Other countries, like India, are looking to ban the sector; in March 2021, the Indian government was due to introduce a digital currency bill that would have made cryptocurrencies illegal in the country. China is furthering its restrictions by prohibiting financial institutions from engaging in related transactions. Countries’ decisions to restrict or ban cryptocurrencies are an attempt to limit the sector’s influence on the world economy, as governments do not want to surrender control of their economies to a decentralized currency. In the UK, the Bank of England released a discussion paper explaining that stablecoins should be subject to the same regulations as fiat currencies; the paper also mentions that the Bank is exploring the potential introduction of its own digital currency, the “Britcoin”.
Let’s start with quantum mechanics. (What could be deeper?) The concept of superposition is infamously hard to render in everyday words. So, not surprisingly, many writers opt for an easy way out: They say that superposition means “both at once,” so that a quantum bit, or qubit, is just a bit that can be “both 0 and 1 at the same time,” while a classical bit can be only one or the other. They go on to say that a quantum computer would achieve its speed by using qubits to try all possible solutions in superposition — that is, at the same time, or in parallel. This is what I’ve come to think of as the fundamental misstep of quantum computing popularization, the one that leads to all the rest. From here it’s just a short hop to quantum computers quickly solving something like the traveling salesperson problem by trying all possible answers at once — something almost all experts believe they won’t be able to do. The thing is, for a computer to be useful, at some point you need to look at it and read an output. But if you look at an equal superposition of all possible answers, the rules of quantum mechanics say you’ll just see and read a random answer. And if that’s all you wanted, you could’ve picked one yourself.
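The point about measurement can be illustrated with a toy simulation. In an equal superposition of all n-qubit basis states, each amplitude is 1/√(2^n), so by the Born rule every outcome has probability 1/2^n, and reading the register yields a uniformly random answer. This sketch (not real quantum simulation, just the resulting measurement statistics) shows that behaviour:

```python
import random

def measure_equal_superposition(n_qubits):
    """Simulate measuring an equal superposition of all n-qubit states.

    Each basis state has amplitude 1/sqrt(2**n), so each of the 2**n
    outcomes is equally likely -- measurement is a uniform draw.
    """
    return random.randrange(2 ** n_qubits)

# Measure a 3-qubit equal superposition many times.
counts = {}
for _ in range(10_000):
    outcome = measure_equal_superposition(3)
    counts[outcome] = counts.get(outcome, 0) + 1

# With 10,000 samples, all 8 outcomes show up with roughly equal frequency:
# the "computer" hands back a random answer, no better than guessing.
print(sorted(counts))
```

This is why useful quantum algorithms cannot simply “try everything at once”: they must choreograph interference so that wrong answers cancel and the right one dominates before measurement.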
One notable graduate of the list is Coinbase, the largest cryptocurrency exchange in the United States, which shook the industry and public markets with its April 14 Nasdaq debut – the largest direct listing in history. At one point during the opening day, Coinbase’s market cap exceeded $100 billion, setting a high bar for crypto startups still eyeing a public offering. Two of this year’s members, Kraken and Gemini (also cryptocurrency exchanges), have discussed going public in the future. But fintech is no longer just a tale of corporate success. Cryptocurrency lenders and exchanges are slowly giving way to the new hot shot of the class – decentralized finance (DeFi). An umbrella term for blockchain-based applications and protocols aiming to replace traditional financial intermediaries like banks and brokerages, DeFi skyrocketed in popularity and market capitalization over the past 12 months – from just over $1 billion in locked value in June 2020 to the current $67.9 billion. The largest among DeFi platforms are lending and borrowing protocols, such as Aave and MakerDAO, and decentralized exchanges like Uniswap and SushiSwap – all built on Ethereum.
In the “Rethinking Search” paper, the Google researchers call indexing the workhorse of modern search. But they envision doing away with indexing by using ever-larger language models that can understand more queries. The Knowledge Graph, for example, may serve up answers to factual questions, but it’s trained on only a small portion of the web. Using a language model built from more of the web would allow a search engine to make recommendations, retrieve documents, answer questions, and accomplish a wide range of tasks. The authors of the Rethinking Search paper say the approach has the potential to create a “transformational shift in thinking.” Such a model doesn’t yet exist; in fact, the authors say it may require the creation of artificial general intelligence or advances in fields like information retrieval and machine learning. Among other things, they want the new approach to supply authoritative answers from a diversity of perspectives, clearly reveal its sources, and operate without bias. A Google spokesperson described LaMDA and MUM as part of Google’s research into next-generation language models and said internal pilots are underway for MUM to help people with queries on billions of topics.
Complex systems are inevitable. That’s the short answer, but we can expand on it a bit. As humans, we deal with complexity every day, and the way we deal with it is to make mental models, or abstractions, of that complexity. In everyday life we handle other complex systems such as automobile traffic, interactions with other people and animals, and even society itself. Decades of IT work have focused on making system models simple (e.g. the three-tier web app), and that works great when it is possible. For better or worse, the situations where it is possible are diminishing. We are entering a world where most, and eventually nearly all, software systems will be complex. What do we mean by “complex?” In this case we mean that a system is complex if it is too large, and has too many moving parts, for any single human to mentally model it with predictive accuracy. Twenty years ago, I could write a content management system and basically understand all of its working parts. I could tell you, roughly, what effect a change to the performance of a query would have on the overall performance of the rest of the application, without having to actually try it. That is no longer the case.
Quote for the day:
"Leadership without character is unthinkable - or should be." -- Warren Bennis