Daily Tech Digest - January 18, 2023

How Will Cloud Computing Make Drone-Based Solutions Smarter

Due to its multi-sector application, cloud-processed data is a valuable resource. For governments and enterprises, it can become a viable source of revenue. As new urban and rural projects are commissioned, these high-resolution datasets are crucial for the planning process. They are useful for supporting several government schemes such as PM Gram Sadak Yojna, PM Awas Yojna, Bharat FiberNet, and many more. For instance, SVAMITVA data along with DEM layers can help officials chart out the optimal route for power lines for rural electrification. Similarly, digital terrain maps can help ascertain natural slopes and assist engineers in designing efficient gravity-aided sewage networks. Cloud computing creates a centralised repository of GIS data which has the potential to drive innovation. Prior to cloud processing, data sharing of this kind faced software and hardware limitations. However, the cloud brings unified data standards across the country, making it hassle-free to access high-quality data.


Why Cybersecurity Learning and Development is a Lifeline During Economic Downturn

More than a third of Europe’s largest tech companies are currently based in London and the UK remains a beacon of technological innovation. Yet, our research suggests that tech companies across the UK lack the technological skills they need to thrive and remain safe in the challenging months ahead. With DCMS’ UK Data Skills Gap report highlighting that the supply of university graduates with specialist technological skills is limited, companies must accept they have a larger role to play in increasing digital skills internally rather than simply looking outside for ready-made talent. Business leaders must put adequate investment and support behind the upskilling of current employees to bolster cybersecurity talent and drive innovation. At the same time, employees should prioritize cybersecurity-related L&D to make themselves an invaluable asset to their organization – proactively identifying training opportunities with a quality L&D partner, one that aligns with their unique learning style and objectives. While there is no cookie-cutter approach to upskilling, employees should be granted access to a range of learning opportunities as part of a defined path of individual development.


Artificial intelligence is here, but the technology faces major challenges in 2023

Whether AI will replace human jobs is less important than more vital ethical questions that need to be addressed in 2023, Bhargava says. The more pressing concern is "who's making these things and what questions are they asking about what biases are baked into it." When tools like ChatGPT are designed by teams with limited perspectives and diversity, the result is a tool lacking in perspective. "These systems that get built … are mirrors for our culture and our practices," says Bhargava. "Which way do they point and who's looking in them? No, they don't embed bias; they reflect it." There are some measures being taken to address the ethical questions around AI bias. Dakuo Wang, associate professor of art and design and computer science, says ChatGPT's real innovation is how it uses human data labelers during the process of training the AI to limit bias and increase accuracy. But even then, the technology is only as good as the data it's been trained on. Without the right data, the inaccuracies and limitations become much more obvious––and potentially dangerous.


Ransomware Looms Large on Third-Party Risk Landscape

First, it is important to have a clear understanding of the enterprise’s IT-related supply chain. This includes identifying all of the suppliers, subcontractors and other partners that process, transmit or store data used in the creation of the enterprise’s products and services. It is also important to understand the relationships between these different entities, as well as the specific products and services that each one provides; the result is a map of the supply chain. Once the supply chain has been mapped out, the next step is to identify the potential risks associated with each component of the chain. This includes both external and internal risks. External risks might include things like natural disasters, political instability or economic downturns. Internal risks might include things like employee turnover, equipment failure or data breaches. To identify these risks, enterprises should consider conducting a risk assessment. This will involve gathering and analyzing data from a variety of sources, including supplier contracts, insurance policies and regulatory compliance reports.


DevOps and platform engineering

Despite many new teams and job titles springing up around DevOps, the platform engineering team is, perhaps, the most aligned to the mindset and objectives of DevOps. Platform teams work with development teams to create one or more golden pathways representing a supported set of technology choices. These pathways don't prevent teams from using something else. Pathways encourage alignment without enforcing centralized decisions on development teams. Rather than pick up tickets, such as "create a test environment", platform teams create easy-to-use self-service tools for the development teams' use. A critical part of platform engineering is treating developers as customers, solving their problems and reducing friction while advocating the adoption of aligned technology choices. ... Platform engineering alone doesn't provide a complete organizational view of performance. The DevOps structural equation model shows us capabilities for leadership, management, culture, and product that are outside a platform team's scope.


The Internet of Things: What security risks should you look out for?

With more businesses adopting the IoT and with smart homes becoming increasingly popular, focusing on cybersecurity alone is not nearly enough. It is also important to ensure the physical security of these devices. Most of these devices are quite small and easily accessible and could be tampered with or stolen. Once stolen, these devices may be taken to another location where they can be disassembled and probed for any data. These stolen devices might also be used to breach the IoT systems to which they are connected. Moreover, a hacker could plant a bug in a device without even having to move it. These issues highlight how important physical security is and why companies need to take steps to ensure the physical safety of their device network. There are several standards for cybersecurity today, and in a lot of cases, companies are even required by law to comply with some of these standards. Unfortunately, no such international standards exist for the IoT. All we have are best practices and recommendations. While steps are being taken to strengthen IoT security, we have yet to see a framework of recognized, international standards for IoT security.


A Platform Team Product Manager Determines DevOps Success

As you build platforms out across the organization, Kersten said, it’s important to ensure that the feedback loops expand accordingly. “If you first build self-service for your own team it tends to be a simpler problem,” he said. “You’ve got the feedback loops already. You should, within a team, be talking to each other. Thinking about what you do as self-service and trying to build those abstractions for yourself, then you’re hopefully freeing up time.” As the platform embraces other teams, “You can’t do platform engineering if you don’t have some way of talking to the people who are actually going to be using the services you build, and working out what their actual problems are, because their problems will be different from yours.” The “State of DevOps” report’s findings underscore the need for a product manager with these “soft skills” to make platform engineering a success at scale. Sixty-one percent of respondents said strong communication skills were the most important product management skills for a platform team’s success.


Why Applying Constant Pressure on Yourself Can Significantly Improve Your Productivity and Success

As with so many things, working through pressure gets easier with practice. It's like a muscle or a skill — you have to train it to strengthen it. No one is walking into the weight room for the first time and squatting with 400 pounds, nor would it be recommended. Without training, you're only going to hurt yourself. There's a reason Lionel Messi is consistently chosen to take penalty kicks; he's taken so many before and has found a way to be comfortable and successful through what's arguably the most pressure-inducing moment of the game. He's been put in the situation before and risen to the challenge repeatedly in a way other players haven't mastered yet. ... Different people have different strategies, but something I've found crucial is recognizing the adrenaline that comes with the feeling of pressure. On a physical level, the fear you might feel during those moments is not all that different from the feeling you get when you're excited, like climbing the highest point of a rollercoaster. The trick is channeling that adrenaline towards the latter and using it to fuel excitement rather than fear. 


AI and Human Creativity - Could it Lead to General Cognitive Decline?

AI might be able to generate new and novel ideas by remixing what is fed into it, but that doesn't necessarily help the humans who create the input improve their access to their own creative powers. It's not just about the quality of what is generated; it is also about improving our thinking skills. Creativity might be innate but we can always get better at inviting its presence. In my experience, that's a mental skill that improves with practice. And highlighting the importance of the human element in creativity is all well and good but creators in a hurry could be ever more inclined to just press a button to get the output to meet a deadline instead of going inside, reflecting, and finding that creative state necessary to do it on their own. And, of course, yes, AI is a tool and it is about how you use it. I think it is also about how you frame its purpose and how that relates to our values as a society. Consider the relative importance of the intrinsic value of creativity versus a context that gives more weight to the speed of delivery and amount of output.


Enterprise Architecture Must Evolve for Digital Transformation

Current enterprise architectures (EAs) were first developed in the 1980s, and while there have been iterations of them since, widely adopted EAs still rest on the same architectural foundations as when they were established. Take for example The Open Group Architecture Framework (TOGAF), which had its first version published in 1995. The foundation still consists of the same four architectural domains: business, application, data, and technical. That foundation was laid before the internet existed. And this is part of the problem. Today it is not uncommon to equate technology with the worldwide connection that is so ingrained in our everyday lives. While TOGAF has managed to support businesses up to now by versioning, including integrating the internet and new capabilities into its architecture, it wasn’t purpose-built for today’s possibilities—digital business. Our understanding of what’s possible drives the need for modernizing enterprise architecture.



Quote for the day:

"No man can stand on top because he is put there." -- H. H. Vreeland

Daily Tech Digest - January 17, 2023

The 7 new rules of IT leadership

There’s no question that stable, strong IT infrastructure is more essential now than ever, yet CIOs can’t succeed by making a steady state the be-all and end-all. Instead, they must be change agents who are not only OK with constant change but also advocate for it while ensuring infrastructure can scale and support that change. “Success is managing change versus moving from one fixed stone to another,” Cameron says. “So for CIOs to be really successful in this new environment, they need to be able to make change continuous, and they have to find ways as leaders to help their people understand how to do that.” He adds: “That means making structural changes.” There is a mindset shift here but equally important — if not more so — is the need to change how work actually happens. One of the most prominent adjustments for IT is the move from approaching technology delivery as projects — something that’s planned, executed, and completed — to a product mindset that embraces incremental improvements delivered throughout a digital tool’s lifecycle.


Essential skills for becoming a CTO

The easiest way into a management and leadership role is to become an engineering manager before you become a CTO. Assuming you have that engineering manager role in your company, there are a whole bunch of great books on engineering management, such as The Pragmatic Programmer by David Thomas and Andrew Hunt. Another good one is Accelerate, which shows you how to measure software delivery performance. A good general technical management book is The Manager’s Path by Camille Fournier, while The Five Dysfunctions of a Team by Patrick Lencioni is very good on psychological safety and interpersonal relations within a team. ... Possibly soft skills have been neglected in the past. Nobody should be trying to take on a management or leadership position without any understanding of what it means to deal with people and motivate them. Empathy, communication and creating an environment of psychological safety so that people can really push the boundaries of what they work on without fear of reprisal are really important in a management role.


AI Lawyer: It's Starting as a Stunt, but There's a Real Need

Advocates say AI's ability to sort information, spot patterns and quickly pull up data means that in a short time, it could become a "copilot" for our daily lives. Already, coders on Microsoft-owned GitHub are using AI to help them create apps and solve technical problems. Social media managers are relying on AI to help determine the best time to post a new item. Even we here at CNET are experimenting with whether AI can help write explainer-type stories about the ever-changing world of finance. So, it can seem like only a matter of time before AI finds its way into research-heavy industries like the law as well. And considering that 80% of low-income Americans don't have access to legal help, while 40% to 60% of the middle class still struggle to get such assistance, there's clearly demand. AI could help meet that need, but lawyers shouldn't feel like new technology is going to take business away from them, says Andrew Perlman, dean of the law school at Suffolk University. It's simply a matter of scale. "There is no way that the legal profession is going to be able to deliver all of the legal services that people need," Perlman said.


The EU wants to regulate your favorite AI tools

Lawmakers in Europe are working on rules for image- and text-producing generative AI models that have created such excitement recently, such as Stable Diffusion, LaMDA, and ChatGPT. They could spell the end of the era of companies releasing their AI models into the wild with little to no safeguards or accountability. These models increasingly form the backbone of many AI applications, yet the companies that make them are fiercely secretive about how they are built and trained. We don’t know much about how they work, and that makes it difficult to understand how the models generate harmful content or biased outcomes, or how to mitigate those problems. The European Union is planning to update its upcoming sweeping AI regulation, called the AI Act, with rules that force these companies to shed some light on the inner workings of their AI models. It will likely be passed in the second half of the year, and after that, companies will have to comply if they want to sell or use AI products in the EU or face fines of up to 6% of their total worldwide annual turnover.


CFOs zero in on digital transformation

Evaluating the results of one’s digital transformation efforts is a constant challenge for financial leaders, who must also deal with finding and retaining digital talent as well as aggregating all of the information one needs across their organization in order to build a technology roadmap, Horvat said. As a result, CFOs are currently focusing on the finance function in their digital transformation efforts. “What they’re prioritizing is really maturing that FP&A function, getting FP&A-specific tools to platform their planning and budgeting,” Horvat said. Ninety percent of CFOs surveyed pointed to evaluating their finance strategy, scope and design as their top priority for 2023, according to the survey, while 83% pointed to planning finance transformation efforts. It is also important to note that CFOs are personally involved in their organizations’ digital transformation efforts both broadly and within the finance function, Horvat said. “I think a lot of it has to do with owning that strategy piece of it, to make sure that it’s advancing in a way that serves the interests of the organization,” he said.


How to succeed in cyber crisis management and avoid a Tower of Babel

Organizations need to develop a working assumption of the main threat factors, targets, and practical ramifications of a cyberattack. The organization should also identify the main scenarios they may need to deal with, including a situation that results in shutting down the main business activities and a situation in which sensitive information is leaked or stolen. These should be based on the nature of the organization, the sector in which it operates, its geographic location and history of cyber events. These scenarios should be updated constantly as the business and the threats change and grow. Publicly listed companies should also be aware of the risks to image and finances that could come with attacks as regulations increasingly require reporting of cyber incidents. In addition, each organization needs to determine its guiding principles, by answering key questions like whether it would negotiate with attackers and whether they would ever consider paying a ransom. It also needs to decide who will mitigate an attack – an internal team or a hired third party.


How AI chatbot ChatGPT changes the phishing game

If attackers ask ChatGPT directly to suggest ideas for a phishing email, they'll get a warning message that this topic is "not appropriate or ethical." But if they ask for suggestions for a marketing email, or an email to tell people about a new human resources webpage, or to ask someone to review a document prior to a meeting—that, ChatGPT will be very happy to do. ... ChatGPT is not limited to English. It says it knows about 20 languages, including Russian, Standard Chinese, and Korean, but people have tested it with nearly 100. That means you can explain what you need in a language other than English, then ask ChatGPT to output the email in English. ChatGPT is blocked in Russia, but there's plenty of discussion in Russian explaining how to get to it via proxies and VPN services and how to get access to a foreign phone number to confirm your location. ... "ChatGPT and large language models in general will be used for benign content much more than for malicious content," says Andy Patel, researcher at WithSecure, who recently released a research report about hackers and GPT-3, an earlier version of ChatGPT.


Greener supply chains call for IoT innovation

With businesses and CEOs facing demands for environmental change and enhanced revenue growth simultaneously, supply chains need to be revolutionised. This can be achieved by strategically integrating the right systems and sensors to unlock opportunities, especially those that reduce energy consumption and waste throughout product lifecycles. The Gartner study behind the CEO findings is entitled 2022 CEO Survey: Sustainability and ESG Become Enduring Change. It says CEOs are also becoming increasingly aware that new technologies have a crucial role to play in supporting sustainability improvements. Artificial Intelligence (AI) was identified by 18% of respondents, putting it at the top of the list of sustainability-supporting technologies, with digitalisation ranking second with 11%. While these findings indicate a growing awareness of technology’s potential to support sustainability, only 4% of CEOs identified IoT-related technologies as a primary example, when in fact the IoT is set to be a major driver.


7 tell-tale signs of fake DevOps

An organization that hyper-focuses on a tool- and technology-centric DevOps culture, rather than on people and processes, is 180 degrees out of sync. “It’s crucial to assess current business practices and needs,” says Mohan Kumar, senior architect at TEKsystems, an IT service management firm. Kumar recommends prioritizing teams. “Instill DevOps culture into communication, collaboration, feedback collection, and analysis,” he suggests. “An experiment-friendly environment that allows developers to fail fast, recover fast, and learn faster builds a blame-free culture within the organization.” Kumar also suggests nurturing a stream of creative ideas by tapping into teams’ collective intelligence. DevOps adoption is an iterative process, so the CIO should begin by evaluating the development team’s current state and then gradually building a strategy of continuous improvement involving people, processes, and tools that can evolve along with future needs and developments. “Ultimately, creativity is a muscle that must be exercised continuously to grow,” Kumar observes.


Digital transformation: 4 tips to keep it human-centered

Rather than diving head-first into digital transformation, it is important to take a step back, consider these factors, and act accordingly. By taking a human-centered approach to digital transformation initiatives, organizations can use technology to transform the lives of the people they serve. We recently saw one of our customers create significant positive change when they considered the people involved in a necessary technology upgrade. ... Human-centered digital transformation requires companies to recognize that people lay the foundation for digital transformation and, therefore, must take the necessary steps to create a seamless experience throughout the process. The shift to a digital-first business environment can be challenging to all stakeholders as they are expected to adapt to rapid changes at an organizational level. Keeping pace with the changing needs of employees and customers will alleviate this burden and foster a strong company culture.



Quote for the day:

"Practice isn't the thing you do once you're good. It's the thing you do that makes you good." -- Malcolm Gladwell

Daily Tech Digest - January 16, 2023

Why Cyber Insurance Will Revive Cyber Business Intelligence

Because cyber insurance deals with risk that has been transferred, there is a subtle but powerful distinction from the need to understand your own risk. In many cases, insurance companies that can curate low risk pools and a favorable loss ratio can significantly improve profits. That’s not the only way they make money, but it is one way. Now enter the resurgence of cyber business intelligence. While concepts like cyber threat intelligence and risk assessments focus on preventing loss, cyber business intelligence aligns with concepts already utilized elsewhere in a business environment. “What pieces of knowledge and trends can I follow – that by following them I can be more profitable?” This is a different mindset. This is one anchored on the idea that “you’ve got to spend money to make money.” This drives a culture and enthusiasm that can foster better innovation, better results and faster progress. There’s another key word there. Business. Not only relevant to technical experts, this information is equally relevant to business leaders and key decision makers. 


No Black Boxes: Keep Humans Involved In Artificial Intelligence

Not all AI needs are created equal. For instance, in low-stakes situations, such as image recognition for noncritical needs, it’s not likely necessary to understand how the programs are working. However, it is critical to understand how code operates and continues to develop in situations with important outcomes, including medical decisions, hiring decisions, or car safety decisions. It’s important to know where human intervention is needed and when it’s necessary for input and intervention. Additionally, because educated men mainly write AI code, according to (fittingly) the Alan Turing Institute, there’s a natural bias to reflect the experiences and worldviews of those coders. Ideally, coding situations in which the end goal implicates vital interests need to focus on “explainability” and clear points where the coder can intervene and either take control or adjust the program to ensure ethical and desirable end performance. Further, those developing the programs—and those reviewing them—need to ensure the source inputs aren’t biased toward certain populations.


3 Things New Engineering Managers Should Focus On

A high-performing team consists of engaged, happy, and motivated people — truly getting the best out of your team means getting the best from the individual. So what does that mean for you? It means quickly getting up to speed on each team member’s background, experiences, portfolio, strengths, growth areas, and goals. How do they want to be recognized? What style of feedback do they prefer? How do they learn best? What goals do they have? The more nuance you can learn about each person, the more successful you will be in leading them. By setting up 1:1 meetings, you’ll be able to learn about each person on your team, coach them, and discuss their progress towards goals. ... Instead of rolling in and making changes, spend this time learning about the processes your team is already using. What are the team’s goals? How do they work together and separately? How does your team integrate with other teams — or not, currently, and is that an issue? Who are the customers and partners?


Post-quantum cybersecurity threats loom large

Considering this net-positive shift in budgets, it’s no surprise that 74% of enterprise leaders have adopted or are planning to adopt quantum computing. Interestingly, nearly 30% of respondents that have adopted or plan to adopt quantum computing expect to see a competitive advantage due to quantum computing within the next 12 months. This represents more than a sevenfold increase year-over-year from 2021 (4%) and highlights the growing commitment to near-term quantum computing initiatives as the technology continues to mature. “We’re getting a unique glimpse into the quantum adoption mindset of global enterprise executives, which mirrors what we’re seeing in our customer base,” said Christopher Savoie, CEO of Zapata Computing. “These findings become more interesting when compared to the data we saw last year. Over the past 12 months, we’ve seen significant new developments in technology, particularly generative AI, and near-term advantages from quantum-inspired technologies that are fueling the momentum for quantum computing planning and adoption.”


Data will be the king in 2023!

The importance of cyber-risk governance is no longer limited to CISOs; conversations are deepening on how organizations can ensure data resiliency, adaptability, and security at the C-Suite level. As we approach 2023, business leaders will need to assess their data infrastructure with a five-point focus approach — scalability, flexibility, agility, security, and cost. Data protection and management will become a top-tier priority for business leaders. Significant amounts of the IT budget will be allocated and invested in technologies to prevent, detect, and recover from inevitable cyberattacks when, not if, they occur. A study by PWC stated that 62% of respondents expect their security budget to increase by as much as 10% in 2023. As cloud investments continue to soar in 2023, the parallel shifts in the threat landscape will also become more sophisticated. As per the recent Commvault-IDC survey, over 28% of Indian enterprises stated they will have multiple private and/or public cloud environments and migrate workloads and data between them by 2023. Thus, protection and data recoverability will be essential components in the enterprise security toolbox of organizations.


What to expect from SASE certifications

Compared to other networking certifications, like the CCNA, which is more about how to operate the technology, Cato’s SASE and SSE certifications are high-level overviews. “Our certification is more about what SASE and SSE mean, what are the implications, and what it means to different IT teams,” says Webber-Zvik. “You see presentations, whiteboards, reading materials, and at the end of each section, there is a quiz. When you complete all the sets and pass all the tests, you get the certification.” The majority of the material covered is not Cato-specific, he says. However, the certification does use Cato’s implementation of SASE and SSE in its examples. Take, for instance, single-pass processing. According to Gartner, this is a key characteristic of SASE, and it means that networking and security are integrated. “We explain it according to Gartner’s definition,” Webber-Zvik says. “We also provide an example of Cato’s implementation and use that to articulate what single-pass processing can look like when it’s outside Gartner theory and in real life.” There is no charge for Cato’s certification training and exam, but that might change, he says.


How to Overcome Challenges in an API-Centric Architecture

There are several potential solutions. If the use case allows it, the best option is to make tasks asynchronous. If you are calling multiple services, it inevitably takes too long, and often it is better to set the right expectations by promising to provide the results when ready rather than forcing the end user to wait for the request. When service calls do not have side effects (such as search), there is a second option: latency hedging, where we start a second call when the wait time exceeds the 80th percentile and respond when one of them has returned. This can help control the long tail. The third option is to complete as much work as possible in parallel: rather than waiting for each service call to return before making the next one, start as many calls as possible at once. This is not always possible because some service calls might depend on the results of earlier service calls. However, coding to call multiple services in parallel and collecting the results and combining them is much more complex than doing them one after the other.
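As a rough illustration of the latency-hedging idea described above, here is a minimal Java sketch using CompletableFuture. The hedge delay, the searchBackend() call, and the thread-pool size are hypothetical placeholders, and a production version would also cancel the losing call; the excerpt only supplies the technique, not this code.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class LatencyHedging {
    private static final ScheduledExecutorService scheduler =
            Executors.newScheduledThreadPool(2);

    // Hedged call: fire the request, and if it has not completed within the hedge
    // delay (for example, the observed 80th-percentile latency), fire a second
    // identical request and take whichever finishes first.
    static <T> CompletableFuture<T> hedged(Supplier<T> call, long hedgeDelayMs) {
        CompletableFuture<T> first = CompletableFuture.supplyAsync(call);

        CompletableFuture<T> second = new CompletableFuture<>();
        scheduler.schedule(() -> {
            if (!first.isDone()) {
                CompletableFuture.supplyAsync(call)
                        .whenComplete((value, error) -> {
                            if (error != null) second.completeExceptionally(error);
                            else second.complete(value);
                        });
            }
        }, hedgeDelayMs, TimeUnit.MILLISECONDS);

        // Resolve with whichever future wins (simplified: no cancellation of the loser).
        return first.applyToEither(second, v -> v);
    }

    public static void main(String[] args) throws Exception {
        // searchBackend() stands in for an idempotent, side-effect-free service call.
        CompletableFuture<String> result = hedged(LatencyHedging::searchBackend, 200);
        System.out.println(result.get());
        scheduler.shutdown();
    }

    private static String searchBackend() {
        try { Thread.sleep((long) (Math.random() * 500)); }
        catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "search results";
    }
}
```

As the excerpt notes, hedging only makes sense for calls without side effects, and the hedge threshold should come from observed latency percentiles rather than a fixed constant.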


The CIO’s role is changing – here’s why

Faced with an increasing number of threats both internal and external, CIOs have had to prioritise areas such as cyber security in recent years just to keep their businesses protected. In doing so, they’ve also been charged with embracing the latest technological developments such as artificial intelligence, big data analytics, and the plethora of connected devices that comprise the burgeoning Internet of Things; technologies that will foster greater innovation and provide their businesses with a more competitive edge. Increasingly, however, it won’t necessarily be an organisation’s IT department that drives the adoption of emerging technologies. More often, other areas of the business will now be in a better position to identify the innovative technology that will deliver greater customer value, and the specific use cases in which it can be implemented. 77 per cent of CIOs surveyed by Gartner claimed that IT staff are primarily providing innovation and collaboration capabilities, compared with 18 per cent stating that non-IT personnel are providing these tools. 


SRE in 2023: 5 exciting predictions

Whether it’s AI assistance, VR immersion, or web3 decentralization, 2023 will continue to push organizations to adopt cutting-edge technology. It’s a challenge to guess which of these ideas will flourish and which will flounder, but either way, having a reliable foundation will be necessary. Adopting even the most successful new ideas at scale will bring new obstacles and types of incidents. These growing pains of new technologies will require new approaches. As organizations experience these growing pains, they’ll turn to SRE to keep their customers happy while they adjust. Incident retrospectives can help teams handle new sources of incidents quickly, while a reliability mindset can keep customer happiness the number one priority. Reliability is the subjective experience of users based on their expectations of the service. While this is a helpful way to align priorities with customer needs, 2023 will bring an even more holistic definition of reliability. Organizations will start thinking about the reliability of their system, not just in terms of their users’ experiences, but as a complete package covering everything starting from development ideation.


10 data security enhancements to consider as your employees return to the office

The unauthorized disclosure of data isn’t always the result of malicious actors. Often, data is accidentally overshared or lost by employees. Keep your employees informed with cyber security education. Employees who go through regular phishing tests may be less likely to engage with malicious actors over email or text messaging. ... An inventory of software, hardware and data assets is essential. Having control over the assets with access to your corporate environment starts with an inventory. Inventories can be a part of the overall vulnerability management program to keep all assets up to date, including operating systems and software. Furthermore, a data inventory or catalogue identifies sensitive data, which allows appropriate security controls like encryption, access restrictions and monitoring to be placed on the most important data. ... Reducing your overall data footprint can be an effective way of reducing risk. 



Quote for the day:

"Smart leaders develop people who develop others, don't waste your time on those who won't help themselves." -- John C Maxwell

Daily Tech Digest - January 15, 2023

How confidential computing will shape the next phase of cybersecurity

At its core, confidential computing encrypts data at the hardware level. It’s a way of “protecting data and applications by running them in a secure, trusted environment,” explains Noam Dror—SVP of solution engineering at HUB Security, a Tel Aviv, Israel-based cybersecurity company that specializes in confidential computing. In other words, confidential computing is like running your data and code in an isolated, secure black box, known as an “enclave” or trusted execution environment (TEE), that’s inaccessible to unauthorized systems. The enclave also encrypts all the data inside, allowing you to process your data even when hackers breach your infrastructure. Encryption makes the information invisible to human users, cloud providers, and other computer resources. Encryption is the best way to secure data in the cloud, says Kurt Rohloff, cofounder and CTO at Duality, a cybersecurity firm based in New Jersey. Confidential computing, he says, allows multiple sources to analyze and upload data to shared environments, such as a commercial third-party cloud environment, without worrying about data leakage.


Not All Multi-Factor Authentication Is Created Equal

Many legacy MFA platforms rely on easily phishable factors like passwords, push notifications, one-time codes, or magic links delivered via email or SMS. In addition to the complicated and often frustrating user experience they create, phishable factors such as these open organizations up to cyber threats. Through social engineering attacks, employees can be easily manipulated into providing these authentication factors to a cyber criminal. And by relying on these factors, the burden to protect digital identities lies squarely on the end user, meaning organizations’ cybersecurity strategies can hinge entirely on a moment of human error. Beyond social engineering, man-in-the-middle attacks and readily available toolkits make bypassing existing MFA a trivial exercise. Where there is a password and other weak and phishable factors, there is an attack vector for hackers, leaving organizations to suffer the consequences of account takeovers, ransomware attacks, data leakage, and more. A phishing-resistant MFA solution completely removes these factors, making it impossible for an end user to be tricked into handing them over even by accident or collected by automated phishing tactics.


Europe’s cyber security strategy must be clear about open source

While the UK government has tried to recognise the importance of digital supply chain security, current policy doesn’t consider open source as part of that supply chain. Instead, regulation or proposed policies focus only on third-party software vendors in the traditional sense but fail to recognise the building blocks of all software today and the supply chain behind it. To hammer the point home, the UK’s 11,000+ word National Cyber Security Strategy does not include a single reference to open source. GCHQ guidance meanwhile remains limited, with little detailed direction beyond ‘pull together a list of your software’s open source components or ask your suppliers.’ ... In this sense, the EU has certainly been listening. The recently released Cyber Resilience Act (CRA) is its proposed regulation to combat threats affecting any digital entity and ‘bolster cyber security rules to ensure more secure hardware and software products’. First, the encouraging bits: the CRA doesn’t just call for vendors and producers of software to have (among other things) a Software Bill of Materials (SBoM) - it demands companies have the ability to recall components.


Eight Common Data Strategy Pitfalls

Lack of data culture: Data hidden within silos with little communication between business units leads to a lack of data culture. Data Literacy and enterprise-wide data training is required to allow business staff to read, analyze, and discuss data. Data culture is the starting point for developing an effective Data Strategy.
The Data Strategy is too focused on data and not on the business side of things: When businesses focus too much on just data, the Data Strategy may just end up serving the needs of analytics without any focus on business needs. An ideal Data Strategy enlists human capabilities and provides opportunities for training staff to carry out the strategy to meet business goals. This approach will work better if citizen data scientists are included in strategy teams to bridge the gap between the data scientist and the business analyst.
Investing in data technology before democratizing data: In many cases, Data Strategy initiatives focus on quick investment in technology without first addressing data access issues. If data access is not considered first, costly technology investments will go to waste.


Here's Why Your Data Science Project Failed (and How to Succeed Next Time)

Every data science project needs to start with an evaluation of your primary goals. What opportunities are there to improve your core competency? Are there any specific questions you have about your products, services, customers, or operations? And is there a small and easy proof of concept you can launch to gain traction and master the technology? The above use case from GE is a prime example of having a clear goal in mind. The multinational company was in the middle of restructuring, re-emphasizing its focus on aero engines and power equipment. With the goal of reducing their six- to 12-month design process, they decided to pursue a machine learning project capable of increasing the efficiency of product design within their core verticals. As a result, this project promises to decrease design time and budget allocated for R&D. Organizations that embody GE's strategy will face fewer false starts with their data science projects. For those that are still unsure about how to adapt data-driven thinking to their business, an outsourced partner can simplify the selection process and optimize your outcomes.


5 Skills That Make a Successful Data Manager

The role of a data manager in an organization is tricky. This person is often neither an IT guy who implements databases on his/her own, nor a business guy who is actually responsible for data or processes (that’s rather a Data Steward’s area of responsibility). So what’s the real value-add of a data manager (or even a data management department)? In my opinion, you need someone who is building bridges between the different data stakeholders on a methodical level. It’s rather easy to find people who consider themselves as experts for a particular business area, data analysis method or IT tool, but it is rather complicated to find one person who is willing to connect all these people and to organize their competencies as it is often required in data projects. So what I am referring to are skills like networking, project management, stakeholder management and change management, which are required to build a data community step-by-step as a backbone for Data Governance. Without people, a data manager will fail! So in my opinion, a recruiter who is looking for data managers should not only challenge technical skills but also these people skills.


Why distributed ledger technology needs to scale back its ambition

There is nonetheless an expectation that DLT can prove to be a net good for financial markets. Foreign exchange markets have an estimated $8.9 trillion at risk every day due to the final settlement of transactions between two parties taking days. This is why the Financial Stability Board and the Committee on Payments and Market Infrastructures have focused their efforts on enhancing cross-border payments with a comprehensive global roadmap. Part of this roadmap includes exploring the use of DLT and Central Bank Digital Currencies. The problem may not be the technology itself, but the aim of replacing current technology systems with distributed networks. DLT networks are being designed to completely overhaul and replace legacy technology that financial markets depend on today. Many pilot projects, such as mBridge and Jura, rely on a single blockchain developed by a single vendor. This introduces a single point of trust, and removes many of the benefits of disintermediation. 


Why is “information architecture” at the centre of the design process?

The information architecture within a design (both process and output) makes the balancing within the equation possible. It also ensures the equation is “solvable” by other people. It does this by introducing logical coherence. It ensures words, images, shapes and colours are used consistently. And it ensures that as we move from idea to execution, we stay true to the original intent — and can clearly articulate it — so that we can meaningfully measure the effectiveness of our design. Without this internal coherence and confidence that our output is an accurate, reliable test of our hypothesis, we’re not doing design. The power of design which has a consistent information architecture is that if we find that our idea (which we translate to intent, experiments and experiences) is not equal to the problem, we can interrogate every part of the equation. We may have made a mistake in execution. Maybe our idea wasn’t quite right. Or even more powerfully, maybe we didn’t really understand the problem fully. 


Improve Your Software Quality with a Strong Digital Immune System

You can improve your software quality with a strong digital immune system since a digital immune system is designed to guard against cyberattacks and other sorts of hostile activities on computer systems, networks, and hardware. It operates by constantly scanning the network and systems for indications of prospective threats and then taking the necessary precautions to thwart or lessen such dangers. This can entail detecting and preventing malicious communications, identifying and containing compromised devices, and patching security holes. A robust digital immune system should offer powerful and efficient protection against cyber threats and assist individuals and companies in staying secure online. Experts in software engineering are searching for fresh methods and strategies to reduce risks and maximize commercial impact. The idea of “digital immunity” offers a direction. It consists of a collection of techniques and tools for creating robust software programmes that provide top-notch user experiences. With the help of this roadmap, software engineering teams may identify and address a wide range of problems, including functional faults, security flaws, and inconsistent data.


Security Bugs Are Fundamentally Different Than Quality Bugs

For each one of the types of testing listed above, a different skillset is required. All of them require patience, attention to detail, basic technical skills, and the ability to document what you have found in a way that the software developers will understand and be able to fix the issue(s). That is where the similarities end. Each one of these types of testing requires different experience, knowledge, and tools, often meaning you need to hire different resources to perform the different tasks. Also, we can’t concentrate on everything at once and still do a great job at each one of them. Although theoretically you could find one person who is both skilled and experienced in all of these areas, it is rare, and that person would likely be costly to employ as a full-time resource. This is one reason that people hired for general software testing are not often also tasked with security testing. Another reason is that people who have the experience and skills to perform thorough and complete security testing are currently a rarity. 



Quote for the day:

"Leadership is particularly necessary to ensure ready acceptance of the unfamiliar and that which is contrary to tradition." -- Cyril Falls

Daily Tech Digest - January 14, 2023

How to build the most impactful engineering team without adding more people

Teams celebrate a 10% improvement in efficiency when they should be looking for a 10x improvement in efficiency. Identify key moments in your product lifecycle when it makes sense to step back and identify the substantial changes that can supercharge productivity. My company builds connectors into a huge variety of data sources. At one time, we were writing 5,000 lines of code to create a single connector, which was not sustainable. Now, a single engineer can build a connector in a week with 100 lines of code. We achieved this by designing a new development framework that allows us to exploit commonalities across the connectors we build and by greatly reducing dependencies among engineers. As soon as one engineer needs input from six other engineers to complete a task, productivity takes a massive hit. Here's a thought experiment you can run to help find your own 10x improvement: Imagine your workload scales 10x overnight, and you absolutely must meet this increase without hiring more engineers or working additional hours. How do you do it? An out-of-the-box thought exercise like this can help you radically improve your approach.


Your project is unique, so why make it replicable?

While replicability isn’t as important as delivery in a modern environment, where software is often unique to the organisation, it is important to be able to prove effectiveness. At Catapult, we use an upskilling system that we call the Lighthouse Model, whereby we identify a team from the ground up that can act as a model for the rest of the business and focus first on developing them as a group. By demonstrating the effectiveness of agile as a foundation on which to build software, a Lighthouse team creates a fertile environment, which removes blocks and gathers data to help develop buy-in across the board. All this works. In 2018, the Standish Group established that ‘Agile projects’ are twice as likely to succeed as waterfall projects. In the same study the company notes that 28 per cent of waterfall projects fail, while only 11 per cent of agile projects meet the same fate. In this context, the metrics of success went beyond whether the project was on time and on budget and considered its outcomes and impact. They looked beyond the delivery against the plan to include the value delivered and customer satisfaction. In essence, they looked for the real meaning of success.


A New Definition of Reliability

The first thing you might assume is that reliability is synonymous with availability. After all, if a service is up 99% of the time, that means a user can rely on it 99% of the time, right? Obviously, this isn’t the whole story, but it’s worth exploring why. For starters, these simple system health metrics aren’t really so “simple.” Starting with just the Four Golden Signals, you’ll end up with the latency, resource saturation, error rate, and uptime of all your different services. For a complex product, this adds up to a whole lot of numbers. How do you combine and weigh all these metrics? Which are the important ones to watch and prioritize? Judging things like errors and availability can be difficult too. Gray failure, or when a service isn’t working completely but hasn’t totally failed either, can be hard to capture with quantitative metrics. How do you decide when a service is “available enough”? What about a situation where your service performs exactly as intended, but doesn’t align with your customers’ expectations? How do you capture these in your picture of system health? Clearly, there needs to be another layer to this definition of reliability!
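To make the "how do you combine and weigh all these metrics" question concrete, here is a small, hypothetical Java sketch of one naive approach: normalizing a few golden-signal readings against target thresholds and folding them into a single weighted health score. The signal names, targets, and weights are illustrative assumptions, not a recommended reliability model.

```java
import java.util.List;

public class HealthScore {

    // One golden-signal reading normalized against a target:
    // 1.0 means "at or better than target", 0.0 means completely degraded.
    record Signal(String name, double observed, double target, double weight, boolean lowerIsBetter) {
        double normalized() {
            double ratio = lowerIsBetter ? target / observed : observed / target;
            return Math.max(0.0, Math.min(1.0, ratio));
        }
    }

    // Weighted average of normalized signals; the weights encode which metrics matter most.
    static double score(List<Signal> signals) {
        double weighted = signals.stream().mapToDouble(s -> s.normalized() * s.weight()).sum();
        double totalWeight = signals.stream().mapToDouble(Signal::weight).sum();
        return weighted / totalWeight;
    }

    public static void main(String[] args) {
        // Hypothetical readings for one service: p99 latency, error rate, saturation, uptime.
        List<Signal> signals = List.of(
                new Signal("p99 latency (ms)", 480, 300, 0.3, true),
                new Signal("error rate (%)", 0.2, 0.5, 0.3, true),
                new Signal("cpu saturation (%)", 65, 80, 0.1, true),
                new Signal("uptime (%)", 99.7, 99.9, 0.3, false));

        System.out.printf("composite health score: %.2f%n", score(signals));
    }
}
```

Even this toy illustrates the excerpt's point: the targets and weights are judgment calls, and a single number can hide gray failures and unmet customer expectations entirely.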


Architecture Pitfalls: Don’t use your ORM entities for everything — embrace the SQL!

I suspect one of the greatest lies ever told in web application development is that if you use an ORM you can avoid writing and understanding SQL, “it’s just an implementation detail”. That might be true at first, but once you go beyond the basics that falls away quickly. ... It’s much better to let the database do this kind of filtering. After all, it’s what all of the clever folk who work on databases spend a lot of time and effort optimising. For most ORMs you have the option of writing analogues to SQL which can get you quite a long way. For example, JPA has JPQL and Hibernate has HQL. These let you build abstracted queries that should work on all databases that your ORM supports. The implication of this is that your team needs to embrace SQL and understand how to use it, rather than avoiding it by using application code instead. To dispel a common source of anxiety on this: you don’t need to be a SQL guru to get started and become familiar with what you will need for the vast majority of your implementation requirements. There are also excellent resources and books available, I will link some below. 
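As a rough sketch of the point about letting the database do the filtering, the following assumes a hypothetical JPA entity Invoice with a total field; the entity, repository, and package namespace (jakarta.persistence, which may be javax.persistence on older JPA versions) are illustrative assumptions, and only the JPQL mechanism itself comes from the excerpt. The first method pulls every row back and filters in application code; the second expresses the same filter in JPQL so the database does the work it is optimized for.

```java
import jakarta.persistence.Entity;
import jakarta.persistence.EntityManager;
import jakarta.persistence.Id;
import jakarta.persistence.TypedQuery;
import java.math.BigDecimal;
import java.util.List;

// Hypothetical entity used only for illustration.
@Entity
class Invoice {
    @Id
    Long id;
    BigDecimal total;

    BigDecimal getTotal() {
        return total;
    }
}

class InvoiceRepository {

    private final EntityManager em;

    InvoiceRepository(EntityManager em) {
        this.em = em;
    }

    // Anti-pattern: load every entity, then filter in Java.
    // The database ships the whole table over the wire just so most rows can be discarded.
    List<Invoice> largeInvoicesInMemory(BigDecimal threshold) {
        return em.createQuery("SELECT i FROM Invoice i", Invoice.class)
                 .getResultList()
                 .stream()
                 .filter(i -> i.getTotal().compareTo(threshold) > 0)
                 .toList();
    }

    // Better: express the filter and ordering in JPQL and let the database do the work.
    List<Invoice> largeInvoices(BigDecimal threshold) {
        TypedQuery<Invoice> query = em.createQuery(
                "SELECT i FROM Invoice i WHERE i.total > :threshold ORDER BY i.total DESC",
                Invoice.class);
        query.setParameter("threshold", threshold);
        return query.getResultList();
    }
}
```

The same idea carries over to Hibernate's HQL mentioned in the excerpt: filtering, sorting, and aggregation stay close to the data rather than being reimplemented in the service layer.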


How To Build A Network Of Security Champions In Your Organization

An SCP enlists employees from all different disciplines across a company (HR, marketing, finance, etc.) for focused cybersecurity training and guidance. These security champions then become the contact point and voice for cybersecurity within their various departments or offices alongside their main role. They help to advise on, embed and reinforce good security practices with their colleagues. This makes security advice more relatable and accessible, avoiding the “us versus them” attitude that can sometimes exist between employees and traditional enterprise security teams. It’s easier for a colleague to explain a security risk or issue to a co-worker than it is for a security pro whom the co-worker has never met. The security champion’s role is a little like that of a department’s fire marshal. In the same way that the marshal doesn’t need to be a specialist in firefighting, the security champion doesn’t need to be an IT or infosec pro; they just need to know how their colleagues work, what the security risks are within their department or team and the common-sense steps to take to mitigate those risks. 


Companies warned to step up cyber security to become ‘insurable’

Carolina Klint, risk management leader for continental Europe at insurance broker Marsh and one of the contributors to the report, said that insurance companies were now coming out and saying that “cyber risk is systemic and uninsurable”. That means, in future, companies may not be able to find cover for risks such as ransomware, malware or hacking attacks. “It’s up to the insurance industry and to the capital markets whether or not they find the risk palatable,” she said in an interview with Computer Weekly, “but that is the direction it is moving in.” In recent days, cyber attacks have disrupted the international delivery services of the Royal Mail and infected IT systems at the Guardian newspaper with ransomware. The Global Risks Report rates cyber warfare and economic conflict as more serious threats to stability than the risks of military confrontation. “There is a real risk that cyber attacks may be targeted at critical infrastructure, health care and public institutions,” said Klint. “And that would have dramatic ramifications in terms of stability.”


6 Roles That Can Easily Transition to a Cybersecurity Team

Software engineers possess various technical skills, including coding and software development. They also understand the complexities involved in developing a secure application. This makes them well-suited for different types of cybersecurity tasks. ... They should also be familiar with various cyber threats, such as malware and phishing. Additionally, since software development is constantly evolving, software engineers should be prepared to keep up with the latest trends to remain competitive. ... Network architects possess a strong knowledge of networking technologies and are proficient in setting up secure networks. While not all security roles require a deep technical understanding, network architects are well-suited to design secure networks and implement protection measures. They can also review existing systems for vulnerabilities and recommend solutions to mitigate risks. ... They should also be familiar with emerging technologies and techniques related to cybersecurity, such as artificial intelligence (AI) and machine learning (ML). Another important skill for network architects is identifying and differentiating between legitimate and malicious traffic signals.


Getting started with data science and machine learning: what architects need to know

In almost every scientific field, the role of the data scientist is actually played by a physicist, chemist, psychologist, mathematician (for numerical experiments), or some other domain expert. They have a deep understanding of their field and pick up the necessary techniques to analyze their data. They have a set of questions they want to ask and know how to interpret the results of their models and experiments. With the increasing popularity of industrial data science and the rise of dedicated data science educational programs, a typical data scientist's training lacks domain-specific training. ... There are two opposing approaches. One is to know which tool to use, pick up a pre-implemented version online, and apply it to a problem. This is a very reasonable approach for most practical problems. The other is to deeply understand how and why something works. This approach takes much more time but offers the advantage of modifying or extending the tool to make it more powerful.


ZeroOps Helps Developers Manage Operational Complexity

The first thing to take into account when implementing ZeroOps for your business: You must consider everything that isn’t directly driving value. Who should be doing those tasks? You want your core staff to be focused on the business, so it’s worth considering a managed service provider as a partner. This can help provide your team with the skills and support they need, while allowing them to focus on their core competencies. The right tools can help your team be more productive than you ever imagined, without hiring new full-time employees. ... More agile, with less pressure and responsibility to handle “the little things” that we know aren’t so little. Imagine how your team members could shine when supported by experts to assist them so they can focus on providing value. Imagine being able to deliver projects much more quickly so delivery expectations actually aligned with what was realistic. ... Managed services can help make your team more productive and capitalize on their talent. When you struggle with a problem, it’s likely that your managed service provider has already solved it for others so you don’t have to reinvent the wheel.


Dark Web Monitoring For Law Firms: Is It Worthwhile?

One real value of a dark web scan is awareness. You should be able to obtain an initial dark web scan free of charge – without paying an ongoing monthly monitoring fee, which we certainly don’t recommend. The initial report will help identify whether you have law firm employees who tend to reuse the same password across multiple sites. It may even identify sites you were not aware of, so that you can immediately change the password. Use the dark web scan to educate employees at your next cybersecurity awareness training session. If you’re not teaching your employees about cybersecurity at least annually, you are missing a very significant part of cyber resilience! A human element is involved in data breaches 82% of the time. Take control of your data and don’t hand it over to a monitoring service. You should be using a password manager and a unique password for each website or application you use. Put a freeze on your credit file at the three major credit bureaus; freezing your credit file is free.
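
As an illustration only (the article does not prescribe any tooling), here is a short Python sketch of how an initial scan report, exported as a hypothetical breaches.csv with email, site and password_hash columns, could be summarised to spot the password reuse described above.

    # Hypothetical example: group a dark web scan export by employee and flag
    # password hashes that appear on more than one site (likely reuse).
    import csv
    from collections import defaultdict

    reuse = defaultdict(set)                         # (email, password_hash) -> sites
    with open("breaches.csv", newline="") as f:      # hypothetical export of the scan
        for row in csv.DictReader(f):
            reuse[(row["email"], row["password_hash"])].add(row["site"])

    for (email, _), sites in reuse.items():
        if len(sites) > 1:
            print(f"{email} appears to reuse one password on: {', '.join(sorted(sites))}")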



Quote for the day:

"The test we must set for ourselves is not to march alone but to march in such a way that others will wish to join us." -- Hubert Humphrey

Daily Tech Digest - January 13, 2023

Poor cloud architecture and operations are killing cloud ROI

If the cloud never had the potential to return ROI to the business, nobody would use it. Indeed, there are businesses that are very successful with cloud, even reshaping the business around the use of cloud computing. These companies are leveraging cloud as a true force multiplier to build innovative solutions, as well as to provide agility and scalability. However, many cannot find business value with cloud computing. Most disturbingly, they are not finding value while spending about the same amount of money as those who are finding value. We must therefore conclude that bad decisions are being made. Cloud computing technology has been relevant for about 15 years, and we understand that it’s what you do and your company culture that make you truly successful with cloud computing, not what you spend. So why are we still seeing winners and losers? ... First, bad architectures need to be fixed before they can operate properly. You can have a disciplined and highly automated operations team and technology stack, but if the solution is poorly designed, the result is going to be less than stellar, no matter what.


Innovation: Your solution for weathering uncertainty

Innovation has always been essential to long-term value creation and resilience because it creates countercyclical and noncyclical revenue streams. Paradoxically, making big innovation bets may now be safer than investing in incremental changes. Our long-standing research shows that innovation success rests on the mastery of eight essential practices. Five of these practices are particularly important today: resetting the aspiration based on the viability of current businesses, choosing the right portfolio of initiatives, discovering ways to differentiate value propositions and move into adjacencies, evolving business models, and extending efforts to include external partners. ... In times of disruption or deep uncertainty, companies have to carefully balance short-term innovations aimed at cost reductions and potential breakthrough bets. As customers’ demands change, overindexing on small product tweaks (that address needs which may be temporary) is unlikely to boost long-term performance. However, “renovations” to designs and processes can produce savings that help fund longer-term investments in innovations that may create routes to profitable growth.


The Truth About Cybersecurity Challenges Facing the Healthcare Industry

In general, healthcare IT has accrued technical debt for more than 25 years. Everywhere you look, whether it’s at the doctor’s office, hospital, or an urgent care facility, you see disparate and often dated IT systems. It’s not as rare as you’d think to see Windows XP-based computers at the check-in desk and throughout the facility. Many of the most common pieces of equipment and attached computer systems run outdated operating systems and unpatched, archaic software, and have little security on them. I promise you it’s not for lack of trying by the IT and cybersecurity team. So much outdated software exists largely because the vendors that support these systems focus on the healthcare aspect rather than upkeep and security. In other instances, some devices were never intended to be connected to a network, leaving them vulnerable to remote attacks because they were never configured to defend against network-based attackers. Finally, there is certainly some “if it ain’t broke, don’t fix it” mentality. Walking around, you’ll find computer systems under people’s desks that have served a single purpose for a very long time.


Time to Look at the Role of the CISO Differently

It is time to stop searching for non-existent profiles, expecting the CISO to be credible one day in front of the Board, the next in front of hackers, the day after in front of developers, and so on across the depth and breadth of the enterprise and its supply chain. Those profiles don’t exist anymore, given the cross-cutting complexity cyber security has developed over the past two decades. The role of the CISO has to be one of a leader, structuring, organising, delegating and orchestrating work across their team, across the firm, and across the multiple third parties involved in delivering or supporting the business. In essence, knowing what to do is reasonably well established, and cyber security good practice at large still protects from most threats and still ensures a degree of compliance with most regulations. But by focusing excessively on purely technical approaches to cyber security challenges, large organizations have failed to protect themselves effectively and efficiently, in spite of massive investments in that space over the last two decades.


MACH as an Enterprise Architecture strategy

MACH is an acronym for Microservices, API-first, Cloud-native, and Headless. It is a modern approach to building and deploying software applications that can help organizations be more agile, scalable, and flexible. In a MACH architecture, software applications are built as a collection of independent, self-contained microservices that communicate with each other through APIs (Application Programming Interfaces). The front-end and back-end components are separated, and the entire solution is designed to be deployed in the cloud. ... There are several benefits of using a MACH architecture for building and deploying software applications. Agile development: MACH architectures allow different parts of an application to be developed and deployed independently, which can make it easier to make changes and updates without disrupting the entire system. This can help organizations be more agile and responsive to changing business needs. Scalability: MACH architectures are designed to be deployed in a cloud computing environment, which can provide the scalability and flexibility needed to support rapid growth or spikes in demand.
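
As a rough illustration of the API-first and headless ideas (not something taken from the article), here is a minimal Python microservice sketch using FastAPI: the service exposes catalogue data purely as JSON over an API, and any front end – web, mobile or kiosk – is free to consume it independently. The service name and product data are invented.

    # Minimal sketch of a headless, API-first microservice. Assumes FastAPI and
    # uvicorn are installed; the catalogue below is placeholder data.
    from fastapi import FastAPI, HTTPException

    app = FastAPI(title="catalog-service")

    PRODUCTS = {"sku-1": {"name": "Desk lamp", "price": 29.0}}   # placeholder data

    @app.get("/products/{sku}")
    def get_product(sku: str):
        # The service returns plain JSON; whichever front end calls it decides
        # how to render the result. That separation is the "headless" part.
        if sku not in PRODUCTS:
            raise HTTPException(status_code=404, detail="unknown SKU")
        return PRODUCTS[sku]

    # Run with: uvicorn catalog_service:app --reload   (if saved as catalog_service.py)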


Maximizing data value while keeping it secure

Many organizations stumble and fail because they lack complete visibility into all data assets in clouds and beyond. To take visibility to a higher level, it’s vital to have a catalog of all managed and shadow assets, along with their owners, locations, security and governance measures enabled for the data. Without a central repository and a single view, there’s no way to know what data exists, how it’s stored, where it’s used and how it’s shared. Essentially, an organization winds up flying blind. Yet the advantages of robust discovery and visibility don’t stop there. With this information it’s possible to adapt and expand security profiles as needs and conditions change. ... Sharing data in the cloud involves complexity and risk. That’s a given. To maximize the opportunity—including harnessing the full functionality of cloud-native tools—an organization must know who is accessing data and how they are using it. Therefore, a robust identity management framework is crucial. Administrators and others must be able to analyze roles and permission settings in data assets that reside in clouds and across multi-cloud frameworks. 
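
To make the idea of a central repository a little more tangible, here is a toy Python sketch (not from the article) of the fields such a catalogue entry might record, plus a trivial check for restricted assets with missing controls or owners – the “flying blind” situation described above. All names and values are invented.

    # Toy sketch of a data catalogue entry; every field and value is illustrative.
    from dataclasses import dataclass, field

    @dataclass
    class DataAsset:
        name: str
        owner: str
        location: str              # e.g. "s3://finance-reports" or "bq://crm.contacts"
        classification: str        # e.g. "public", "internal", "restricted"
        controls: list = field(default_factory=list)   # encryption, masking, DLP, ...

    catalog = [
        DataAsset("crm_contacts", "sales-ops", "bq://crm.contacts", "restricted",
                  ["column masking", "row-level access"]),
        DataAsset("legacy_exports", "unknown", "s3://tmp-exports", "restricted"),
    ]

    # Flag restricted assets with no recorded controls or no known owner.
    for asset in catalog:
        if asset.classification == "restricted" and (not asset.controls or asset.owner == "unknown"):
            print(f"review needed: {asset.name} at {asset.location}")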


Top automation pitfalls and how to avoid them

Automating a bad process can make things worse, as it can magnify or exacerbate underlying issues, especially if humans are taken out of the loop. In some cases, a process is automated because the technology is there, even if automation isn’t required. For example, if a process occurs very rarely, or there’s a great deal of variation in the process, then the cost of setting up the automation, teaching it to handle every use case, and training employees how to use it may be more expensive and time-consuming than the old manual approach. And putting the entire decision into the hands of data scientists, who may be far removed from the actual work, or of end users who might not know how automation works, can easily send a company down a dead end, says James Matcher, intelligent automation leader at Ernst & Young. That recently happened at a company he worked with, a retail store chain with locations around the US. The retailer approached people on the front lines, the employees and managers working on the shop floors, for suggestions about manual processes that should be automated.
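
As a back-of-the-envelope illustration of that cost argument (every figure below is invented), here are a few lines of Python comparing the cost of building and maintaining an automation for a rare, variable process with the cost of simply leaving it manual.

    # Invented figures: compare automating a rarely run process with keeping it manual.
    def automation_cost(build, yearly_maintenance, training, years):
        return build + training + yearly_maintenance * years

    def manual_cost(runs_per_year, hours_per_run, hourly_rate, years):
        return runs_per_year * hours_per_run * hourly_rate * years

    horizon = 3   # years
    auto = automation_cost(build=60_000, yearly_maintenance=8_000, training=5_000, years=horizon)
    manual = manual_cost(runs_per_year=6, hours_per_run=4, hourly_rate=80, years=horizon)

    print(f"automation: ${auto:,} vs manual: ${manual:,} over {horizon} years")
    # For a process run only six times a year, the manual path is far cheaper here.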


What’s the role of the CTO in digital transformation?

A CTO needs to take on the role of the ‘bridge builder’ between the strictly technical components of a transformation strategy and how they apply to people and processes in the specific context of an organisation. Digital transformation is a team activity: each role needs to bring its full insights and experience to the process for the CTO to manage. The CTO has specific technological insight and therefore needs to be directly involved in helping the entire organisation identify where technical systems are simply obsolete and not fit for purpose. So, as well as being a bridge builder, CTOs naturally lead the charge when dealing with a technology-led approach. They must be able to explain where the value is in the application of technological change in context – too often we see visions that are de-contextualised from the reality on the ground, and that kind of technological planning does not allow for realistic strategic planning. With visions of the ambitious but feasible in sight, it is then the whole leadership team’s task to decide what course to map out and to work together on the digital transformation journey.


How Organizations Should Respond to the CircleCI Security Incident

CircleCI has taken proactive steps to mitigate risk for its customers, but simply revoking secrets from the platform is not enough, according to Jaime Blasco, co-founder and CTO of cybersecurity company Nudge Security. “It’s still important to assume that every connected application and secret has been compromised. Organizations should verify the steps that these vendors have taken and also take steps to rotate secrets within any other connected application,” he explains. Customers can leverage commercially available or open-source tools, aside from the one offered by CircleCI, to discover their secrets. “One option is to use TruffleHog, an open-source tool that scans for secrets across multiple platforms, including CircleCI, GitHub, GitLab, and AWS S3,” says Blasco. CircleCI is assuming responsibility and taking steps to protect its customers, Assaf Morag, lead data analyst at cloud native security company Aqua Security, notes. But it is important for customers to respond proactively to the security incident as well.
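
The practical answer for most teams is a purpose-built scanner such as the TruffleHog tool mentioned above, but as a deliberately naive illustration of what “discover your secrets” means, here is a short Python sketch that flags long, high-entropy tokens in local YAML configuration files so they can be reviewed and rotated. The file pattern and the entropy threshold are arbitrary choices for illustration, not anything prescribed by CircleCI or the quoted experts.

    # Naive stand-in for secret discovery: flag long, high-entropy tokens in config files.
    import math, re
    from collections import Counter
    from pathlib import Path

    def shannon_entropy(s: str) -> float:
        counts = Counter(s)
        return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

    TOKEN = re.compile(r"[A-Za-z0-9+/_=-]{20,}")        # crude "looks like a credential" pattern

    for path in Path(".").rglob("*.yml"):               # e.g. CI configuration files
        for line_no, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for token in TOKEN.findall(line):
                if shannon_entropy(token) > 4.0:        # arbitrary cut-off
                    print(f"{path}:{line_no} possible secret starting {token[:8]}...")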


Artificial intelligence in strategy

Every business probably has some opportunity to use AI more than it does today. The first thing to look at is the availability of data. Do you have performance data that can be organized in a systematic way? Companies that have deep data on their portfolios down to business line, SKU, inventory, and raw ingredients have the biggest opportunities to use machines to gain granular insights that humans could not. Companies whose strategies rely on a few big decisions with limited data would get less from AI. Likewise, those facing a lot of volatility and vulnerability to external events would benefit less than companies with controlled and systematic portfolios, although they could deploy AI to better predict those external events and identify what they can and cannot control. Third, the velocity of decisions matters. Most companies develop strategies every three to five years, which then become annual budgets. If you think about strategy in that way, the role of AI is relatively limited other than potentially accelerating analyses that are inputs into the strategy. 



Quote for the day:

"Effective questioning brings insight, which fuels curiosity, which cultivates wisdom." -- Chip Bell