Daily Tech Digest - November 22, 2023

What It Means to Be a Software Architect — and Why It Matters

One of the misperceptions that architects face is that we are engaging in architecture for architecture’s sake, or that we propose new technologies mainly because of the “coolness” factor. Our challenge is to counter this misperception by arguing not merely for the aesthetic value of good design, but for its pragmatic, economic value. We need to frame the need for intentional design as something that can save the company significant costs by averting disadvantageous technology and design choices, producing a distinct competitive edge through market differentiation, and paving the way for increased customer satisfaction. ... My commentary on some of Martin Fowler’s views of software architecture is not intended to paint a complete picture of this important role and how it differs from other types of architects. Rather, I’ve sought to highlight the importance of designing the structure of a system at the code level to ensure that the application of relevant patterns results in a design that can sustain cumulative functionality over time, increasing business value while reducing time to market.


5 Ways to Supercharge Incident Remediation with Automation

It’s a balance between your confidence in the automation, the value or cost of the incident, and how frequently the task occurs. Common incidents with proven automated steps for diagnosis and remediation are good opportunities to trigger with AIOps. From there, follow a similar process to prioritize your incident response. Automate diagnosis and remediation steps for serious outages to speed resolution. Then focus on increasing efficiency by automating recurring diagnostics and remediation actions that occur across many kinds of incidents. You can safely automate and trigger lower-risk actions such as read-only diagnostic pulls with AIOps, giving downstream personnel the information they need when they are paged. You can automate common remediation actions and make them available for responders to use. This automation can utilize secrets management tools such as Vault to enable privileged actions in production environments without sharing credentials, making it safer to delegate to responders.
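The triage described above (auto-running read-only diagnostics while leaving privileged remediation for a human to trigger) can be sketched as a simple risk gate. The action names and fields below are hypothetical, not tied to any particular AIOps product:

```javascript
// Minimal sketch: gate automated incident actions by risk level.
// Action names and the readOnly flag are illustrative placeholders.
const actions = [
  { name: "fetch-recent-logs", type: "diagnostic", readOnly: true },
  { name: "restart-service", type: "remediation", readOnly: false },
];

// Read-only diagnostics run automatically on alert; privileged
// remediation is only surfaced for a human responder to trigger.
function planResponse(actions) {
  return {
    autoRun: actions.filter((a) => a.readOnly),
    offerToResponder: actions.filter((a) => !a.readOnly),
  };
}

const plan = planResponse(actions);
console.log(plan.autoRun.map((a) => a.name)); // ["fetch-recent-logs"]
console.log(plan.offerToResponder.map((a) => a.name)); // ["restart-service"]
```

In a real pipeline, the human-triggered branch is where a secrets manager such as Vault would issue short-lived credentials at execution time, so responders never handle production secrets directly.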


Tech Pros Quitting Over Salary Stagnation, Stress

Gartner Vice President Analyst Lily Mok told InformationWeek via email that CIOs should work with their recruitment and compensation teams to identify IT roles and skills areas facing higher attrition risk and recruitment challenges due to noncompetitive compensation. “This will help pinpoint where additional funding will be needed in the short term to address pay gaps,” she says. “Organizations with limited financial resources should prioritize allocating increases to high-risk areas.” She also recommends conducting spot-checks of market pay conditions on at least a quarterly basis and updating pay benchmarks for key IT roles and skills areas with more recent data. “At the very least, I would recommend an annual review of market pay levels for key IT jobs and skills areas,” Mok adds. ... Another suggestion is to create a separate salary structure for IT, an approach that helps avoid force-fitting IT jobs into enterprise-wide pay grades that often place a higher weight on internal equity than external competitiveness when valuing jobs across different functions.


Unlocking Cyber Resilience: The Role Of SBOMs In Cybersecurity

Implementing an SBOM strategy is a step towards fortifying your cybersecurity defenses. While having a list of the components that make up your software supply chain is better than not having one, context is also crucial. You don’t just want to know that you have a given code module—but all of the associated data as well. Vulnerabilities and exploits tend to affect specific versions, so you need to know the details of the versions in your environment, the date the code was released, where and how the code is used, and so on. Automation is essential. It’s impractical, bordering on impossible, to manage or maintain an accurate SBOM through any manual process. By automating SBOM generation and maintenance, the margin for human error diminishes, the speed of response accelerates, and organizations can scale their security practices as they grow. Compliance is another piece of the puzzle. Your SBOM solution should align with industry standards and regulatory requirements, ensuring that you aren't just secure, but also compliant.
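The version-awareness the author stresses is exactly what an automated SBOM check exploits: once components and versions are machine-readable, matching them against advisories is a simple join. Below is a minimal sketch over a simplified CycloneDX-style component list; the SBOM contents and advisory data are invented for illustration:

```javascript
// Sketch: scan a simplified CycloneDX-style SBOM for components whose
// exact versions appear in a known-vulnerable list. Data is made up.
const sbom = {
  components: [
    { name: "log4j-core", version: "2.14.1" },
    { name: "lodash", version: "4.17.21" },
  ],
};

const advisories = [
  { name: "log4j-core", vulnerableVersions: ["2.14.0", "2.14.1"] },
];

// Flag every component whose name and version match an advisory.
function findVulnerable(sbom, advisories) {
  return sbom.components.filter((c) =>
    advisories.some(
      (a) => a.name === c.name && a.vulnerableVersions.includes(c.version)
    )
  );
}

console.log(findVulnerable(sbom, advisories));
// [{ name: "log4j-core", version: "2.14.1" }]
```

Real tooling matches on version ranges and package identifiers rather than exact strings, but the principle is the same: the SBOM is only actionable because versions are recorded precisely.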


CISOs can marry security and business success

While businesses aim for different outcomes, one goal that the business typically prescribes for cybersecurity is business continuity. This is probably because most executives view cybersecurity only as an operational necessity. At the same time, they fail to see cybersecurity’s essential contribution to the due diligence aspect of the procurement process. The complexity and length of procurement processes have increased over the years, as prospective clients use them as part of their third-party risk management. Executives who are aware of clients’ needs can use them to improve the cybersecurity of the organization and its offerings, by translating them into features that will raise the offering’s competitive advantage. Traditionally, R&D and innovation teams perceive the CISO’s role as an obstacle to innovation and advancement. Conventional security entities frequently resort to phrases like “this can’t be done due to security protocols,” obstructing changes to existing infrastructure and impeding innovation. If security is confined to an IT concern rather than recognized as a business imperative, CISOs struggle to emerge as strategic partners.


Advanced Applications of Open-Source Technologies

The Evolution of Open-Source Culture: The widespread adoption of open-source technologies is attributed to the culture and philosophy underpinning the open-source movement. Early pioneers in the open-source community championed the belief in the transformative power of collaborative, community-driven efforts and unrestricted access to software source code. For young developers exploring careers, open source presents exciting opportunities. Contributing to open-source projects enables developers to hone their skills, gain visibility, and engage with mentorship from experienced professionals. ... Demonstrated by Brazil’s Amazonia-1 satellite program, Julia is instrumental in in-orbit sensor calibration, showcasing its adaptability beyond conventional software development. NASA, a leader in space exploration, also utilises Julia for various purposes, including gaining insights into the intricacies of Earth’s oceans. This strategic adoption of open-source technology highlights its pivotal role as more than just a developer’s tool, serving as a crucial enabler to tackle real-world challenges on a global scale.


Generative AI is a developer's delight. Now, let's find some other use cases

"We aren't surprised that the most common application of generative AI is in programming, using tools like GitHub Copilot or ChatGPT," Mike Loukides, author of the O'Reilly report, writes. "However, we are surprised at the level of adoption." There is also evidence of a healthy tools ecosystem that has already sprung up around generative AI, the report indicates. ... "Automating the process of building complex prompts has become common, with patterns like retrieval-augmented generation (RAG) and tools like LangChain. And there are tools for archiving and indexing prompts for reuse, vector databases for retrieving documents that an AI can use to answer a question, and much more. We're already moving into the second generation of tooling." ... "Programmers have always developed tools that would help them do their jobs, from test frameworks to source control to integrated development environments. Programmers will do what's necessary to get the job done, and managers will be blissfully unaware as long as their teams are more productive and goals are being met."
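The retrieval-augmented generation (RAG) pattern mentioned in the report can be illustrated without any framework: retrieve the most relevant documents, then build a prompt that grounds the model's answer in them. The sketch below uses naive keyword overlap instead of a vector database, and is not LangChain's API; everything in it is a toy stand-in for the real tooling:

```javascript
// Toy RAG sketch: rank documents by keyword overlap with the query,
// then assemble a grounded prompt. Real systems use embeddings and
// vector databases; this only shows the prompt-assembly pattern.
const docs = [
  "Cypress is a front-end testing framework.",
  "LangChain helps compose prompts and chains for LLM apps.",
  "Vault manages secrets for production systems.",
];

// Count how many query words appear in the document (crude relevance).
function score(doc, query) {
  const words = query.toLowerCase().split(/\s+/);
  return words.filter((w) => doc.toLowerCase().includes(w)).length;
}

// Keep the top-k documents and prepend them as context for the model.
function buildRagPrompt(query, docs, k = 2) {
  const context = docs
    .map((d) => ({ d, s: score(d, query) }))
    .sort((a, b) => b.s - a.s)
    .slice(0, k)
    .map((x) => x.d);
  return `Answer using only this context:\n${context.join("\n")}\n\nQuestion: ${query}`;
}

const prompt = buildRagPrompt("What does LangChain help with?", docs);
console.log(prompt.includes("LangChain helps compose")); // true
```

The prompt string would then be sent to whatever LLM the team uses; the tooling generation the report describes mostly automates this retrieve-and-assemble loop.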


In the symphony of enterprise, every business today dances to the silent tune of technology

Not all AI applications have had a positive impact – content writing and the media industry are the worst hit. It was widely believed that creative industries would be the last to be impacted by technologies like AI; however, the ground realities are very different. One of the demands of the striking Writers Guild of America was that AI would not encroach on writers’ credits and compensation. No matter the core product, all functions of an organization are now utilizing technology in some form or manner – planning, organizing, analysis, marketing, sales, customer engagement or service. Technology has always been a catalyst for progress, often propelling non-tech companies into new realms of efficiency and cost-effectiveness. From the industrial revolution’s steam engines to the digital age’s computers, companies outside the technology sector have harnessed innovation to transform their operations. Today, the conductor of this transformative orchestra is artificial intelligence (AI) and its darling subset – Generative AI.


5 pillars of a cloud-conscious culture

“A developer shouldn’t just provision an extra-large server and then leave it running,” says Firment. “Coders have to learn to work in a cloud native way. That requires understanding terms like elasticity, scalability, and resiliency. They need to know what we mean by multiple availability zones. Developers can still leverage their skills in the cloud, but they just have to apply them in the new way.” Building a culture is like building a tribe, and certificates are a good marker of the new tribe. They create a sense of belonging. Rituals are equally important. “As individuals get certified, create a cloud of fame,” says Firment. “That’s a great way to say you value people who develop the skills. And it’s an artifact of the new culture.” Celebrating certification is also highly effective. “Establish a weekly or monthly cloud hour, where people share what they’re learning on the way to getting certified,” he says. “Ultimately, they should share how they’re applying the knowledge and customer success stories. Storytelling is a big part of creating a culture.”


The SSO tax is killing trust in the security industry

Before some of these solutions are adopted, there are steps we can all take. If you are responsible for identity and access management at an organization, have you audited the authentication tokens you rely on to ensure they operate as expected? Have you considered what compensating controls you could put in place? Are there security products that can do that auditing for you or otherwise mitigate this risk in your environment? Do the security questionnaires your company sends to potential SaaS application providers ask how they configure authentication tokens? It is going to take a serious collaborative "security by design" effort between SSO providers, application developers, and browser companies to repair the broken SSO environment we currently operate under. We single out application providers for criticism in this article because they so often charge an upgrade fee to integrate with SSO. If they are going to charge us a tax, they need to step up or share in the blame for the compromises that will continue to happen. 



Quote for the day:

"Success consists of getting up just one more time than you fall." -- Oliver Goldsmith

Daily Tech Digest - November 21, 2023

How to apply design thinking in data science

Observing end-users and recognizing different stakeholder needs is a learning process. Data scientists may feel the urge to dive right into problem-solving and prototyping but design thinking principles require a problem-definition stage before jumping into any hands-on work. “Design thinking was created to better solutions that address human needs in balance with business opportunities and technological capabilities,” says Matthew Holloway, global head of design at SnapLogic. To develop “better solutions,” data science teams must collaborate with stakeholders to define a vision statement outlining their objectives, review the questions they want analytics tools to answer, and capture how to make answers actionable. Defining and documenting this vision up front is a way to share workflow observations with stakeholders and capture quantifiable goals, which supports closed-loop learning. Equally important is to agree on priorities, especially when stakeholder groups may have common objectives but seek to optimize department-specific business workflows.


The role of big data in auditing and assurance services

Conventionally, audit judgements rely solely on evidence sourced from structured datasets in an organization’s financial records. But technological advances in data storage, processing power and analytic tools have made it easier to obtain unstructured data to support audit evidence. Big data can be used for prediction by applying complex analytics to glean audit evidence from datasets and other sources spanning organizations, industries, nature, internet clicks, social media, market research and numerous others. ... An innovative system will not only enable the application of artificial intelligence-embedded Natural Language Processing (NLP) to streamline unstructured data but also ensure its integration with Optical Character Recognition (OCR). These capabilities and other cutting-edge technologies will effectively help convert both structured and unstructured data into meaningful insights to drive audits. Thus, big data makes it easier to eliminate human errors, flag risks in time and spot fraudulent transactions and, in effect, modernize audit operations, thereby improving the efficiency and accuracy of the financial reporting process.
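The risk-flagging and fraud-spotting described above often starts with simple rules run over transaction data at scale. The sketch below applies two illustrative rules (the thresholds, rules, and data are invented for the example; real audit analytics are far richer):

```javascript
// Illustrative rule-based risk flagging over transaction records.
// Rules, thresholds, and data are hypothetical examples.
const transactions = [
  { id: 1, amount: 9999, vendor: "Acme" },
  { id: 2, amount: 120.5, vendor: "Acme" },
  { id: 3, amount: 120.5, vendor: "Acme" }, // possible duplicate payment
];

function flagRisks(txns) {
  const flags = [];
  const seen = new Set();
  for (const t of txns) {
    // Rule 1: amounts just under a round approval threshold can
    // indicate deliberate splitting to dodge sign-off.
    if (t.amount >= 9000 && t.amount < 10000) {
      flags.push({ id: t.id, rule: "near-threshold" });
    }
    // Rule 2: identical vendor + amount may be a duplicate payment.
    const key = `${t.vendor}:${t.amount}`;
    if (seen.has(key)) flags.push({ id: t.id, rule: "duplicate" });
    seen.add(key);
  }
  return flags;
}

console.log(flagRisks(transactions));
// [{ id: 1, rule: "near-threshold" }, { id: 3, rule: "duplicate" }]
```

The value of big data here is coverage: rules like these can run over the full population of transactions rather than the samples a manual audit would inspect.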


Operators unprepared for high gains from low-power IoT roaming

A key feature supported by the latest technology is passive or ambient IoT, which aims to connect sensors and devices to cellular networks without a power source and could dramatically increase the number of cellular IoT devices. This facet is increasingly appealing to several enterprise verticals. NB-IoT and LTE-M are backed by major mobile operators, offering standardised connectivity with global reach. Yet Juniper warned that a key technical challenge operators face is their inefficiency in detecting low-power devices roaming on their networks, meaning that operators lose potential revenue from these undetected devices. Due to their low data usage and intermittent connectivity, these devices require constant network monitoring to fully maximise roaming revenue. ... “Operators must fully leverage the insights gained from AI-based detection tools to introduce premium billing of roaming connections to further maximise roaming revenue,” said research author Alex Webb. “This must be done by implementing roaming agreements that price roaming connectivity on network resources used and time connected to the network.”
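The pricing model Webb describes, charging on both network resources used and time connected, can be sketched in a couple of lines. The rates below are invented placeholders (in cents), not figures from Juniper's research:

```javascript
// Sketch of usage-plus-time roaming pricing: charge per megabyte of
// network resources used AND per hour connected. Rates are invented.
function roamingChargeCents({ megabytes, hoursConnected }, rates) {
  return megabytes * rates.centsPerMB + hoursConnected * rates.centsPerHour;
}

const rates = { centsPerMB: 5, centsPerHour: 1 };
// A low-power sensor: tiny data volume, but connected for a month (720 h).
console.log(roamingChargeCents({ megabytes: 2, hoursConnected: 720 }, rates)); // 730
```

The point of the time-connected term is visible in the example: a low-power device that is nearly invisible by data volume still generates meaningful revenue once connection time is priced in.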


How to improve cyber resilience by evaluating cyber risk

The biggest challenge in evaluating cyber risk is that we always underestimate it. The impact is almost always worse than what was estimated. A lot of us are professional risk mitigators and managers, and we still get it wrong. Going back to the MGM Resorts cyber attack, I refuse to believe that MGM expected that their ransomware breach was going to cost them US$1 billion between lost revenues, lost valuation and loss of confidence from both the market and customers. That, to me, is the biggest issue. There is a huge gap there. Even though there are a lot of numbers surrounding the cost of a data breach, they still all significantly underestimate it. So that, to me, is the biggest area. ... We are spending a lot of time talking about the tools that these actors use, whether it is artificial intelligence (AI), ransomware, hacking, national security threats and so on. To make an impact against this threat we must focus on resilience and what you can tolerate, then understand what you can withstand and under what conditions you can withstand it.


What Sam Altman's move to Microsoft means for ChatGPT's future: 3 possible paths forward

Microsoft acquires what's left of OpenAI and kicks OpenAI's current board of directors to the curb. Much of OpenAI's current technology runs on Azure already, so this might make a lot of sense from an infrastructure point of view. It also makes a lot of sense from a leadership point of view, given that Microsoft now has OpenAI's spiritual and, possibly soon, technical leadership. Plus, if OpenAI employees were already planning to defect, it makes a lot of sense for Microsoft to simply fold OpenAI into the company's gigantic portfolio. I think this may be the only practical way forward for OpenAI to survive. If OpenAI were to lose the bulk of its innovation team, it would be a shell operating on existing technology in a market that's running at warp speed. Competitors would rapidly outpace it. But if it were brought into Microsoft, it could keep moving at pace, under the guidance of leadership it is already comfortable with, and continue executing on plans it already has.


Kaspersky’s Advanced Persistent Threats Predictions for 2024

Botnets are typically more prevalent in cybercrime activities than in APT operations, yet Kaspersky expects the latter to start using them more. The first reason is to create more confusion for defenders. Attacks leveraging botnets might “obscure the targeted nature of the attack behind seemingly widespread assaults,” according to the researchers. In that case, defenders might find it more challenging to attribute the attack to a threat actor and might believe they face a generic widespread attack. The second reason is to mask the attackers’ infrastructure. The botnet can act as a network of proxies, but also as intermediate command and control servers. ... The global increase in the use of chatbots and generative AI tools has been beneficial in many sectors over the last year. Cybercriminals and APT threat actors have started using generative AI in their activities, with large language models explicitly designed for malicious purposes. These generative AI tools lack the ethical constraints and content restrictions inherent in legitimate AI implementations.


Alternative data investing: Why connected vehicle data is the future

One of the most promising subsectors of the alternative data realm is geolocation, standing at an impressive valuation of $400 million. Geolocation is prized for its ability to correlate ground-level activities to consumer trends, business health and revenue. But within this sphere, the real game-changer is ‘connected vehicle’ data. Connected vehicle data, a subset of geolocation, is an invaluable resource for investors. It enables analysis of both passenger car and truck activities across almost any location. This opens a window into consumer trends, helping investors decipher current demand dynamics before company earnings calls. Moreover, tracking truck activity provides insights into a company’s supply chain health. By monitoring truck traffic at key economic areas – be it manufacturing facilities, warehouses, distribution centers or seaports – investors can gauge a company’s production, distribution and supply chain efficiencies. This level of detail can provide a holistic view of a company’s operations and its future revenue potential.


The Potential Impact of Quantum Computing on Data Centre Infrastructure

All kinds of use cases involving complex algorithms are candidates for being addressed with quantum computers. Use cases around financial modelling and risk analysis at the macro level, environmental analysis and climate modelling (especially for development projects in ecologically sensitive areas), supply chain optimisation, life sciences, AI-based drug discovery and repurposing, custom treatment for complex diseases, and more would be candidates for using quantum computing in data centres. Apart from the above, one key area where quantum computing will affect everybody is deepfakes. In a very short time, generative AI has shown its capability to create fake videos of anybody with little training material. ... Quantum computing will play a key role in providing the infrastructure required to support algorithms that can identify such fake videos and stop them before they go viral and create law-and-order problems in societies. Players like Facebook (including WhatsApp) and Instagram have strong requirements for quantum computing to address the menace of fake news and fake videos.


7 steps for turning shadow IT into a competitive edge

A formalized and transparent prioritization process is also important. CIOs need a way to capture lightweight business cases or forecast business value to help prioritize new opportunities. At the same time, CIOs, CISOs, and compliance officers need to establish a risk management framework to quantify when shadow IT creates business issues or significant risks. CIOs should partner with CFOs in this endeavor because when departments procure their own technologies without IT, there are often higher procurement costs and implementation risks. CIOs should also elicit their enterprise architect’s guidance on where reusable platforms and common services yield cost and other business benefits. “Shadow IT often wastes resources by not generating documentation for software that would make it reusable,” says Anant Adya, EVP at Infosys Cobalt. “Insightful and far-reaching governance coupled with detailed application privileges discourage shadow IT and helps build collaborative operating models.” Creating technology procurement controls that require CIO and CISO collaboration on technology spending is an important step to reduce shadow IT.


Running Automation Tests at Scale Using Cypress

With Cypress, teams can easily create web automation tests, debug them visually, and run them automatically in CI/CD pipelines, thus supporting Continuous Integration (CI) and development. Though Cypress is often compared with Selenium WebDriver, it is fundamentally and architecturally different: it does not use Selenium WebDriver for automation testing, which enables users to write faster, easier, and more reliable tests. Installation and setup are also easier than with other test automation frameworks because Cypress is a node package. You just need to run the npm command npm install cypress and then use the Cypress framework. ... Cypress offers some out-of-the-box features to run automation tests at scale. Time Travel: Cypress allows you to "time travel" through the web application, showing what happens at each step as the tests execute. You can step forward and backward, and even pause test execution at run time, giving you the flexibility to inspect the application’s state during real-time test execution. Auto Waits: Cypress has built-in auto-wait functionality that automatically waits for commands and assertions to complete before moving to the next step.
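The auto-wait behavior can be understood as retry-until-the-assertion-passes rather than fail-immediately. The sketch below is a conceptual illustration of that idea in plain JavaScript, not Cypress's actual implementation or API:

```javascript
// Conceptual sketch of Cypress-style auto waiting: keep retrying an
// assertion until it passes or a timeout elapses, instead of failing
// the moment an element isn't there yet.
async function retryUntil(assertion, { timeout = 1000, interval = 50 } = {}) {
  const deadline = Date.now() + timeout;
  for (;;) {
    try {
      return assertion(); // assertion passed: stop retrying
    } catch (err) {
      if (Date.now() >= deadline) throw err; // give up after timeout
      await new Promise((r) => setTimeout(r, interval));
    }
  }
}

// Simulate an element that only "appears" after a short delay,
// the way a page renders asynchronously.
let element = null;
setTimeout(() => { element = { text: "Welcome" }; }, 100);

retryUntil(() => {
  if (!element) throw new Error("element not found yet");
  return element.text;
}).then((text) => console.log(text)); // "Welcome"
```

This is why Cypress tests rarely need explicit sleeps: each command carries its own retry window, so timing flakiness is absorbed by the framework rather than hard-coded waits.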



Quote for the day:

"Success is how high you bounce when you hit bottom." -- Gen. George Patton

Daily Tech Digest - November 20, 2023

6 most underhyped technologies in IT — plus one that’s not dead yet

Although AI gets all the attention, the key components that make it work often do not, including data. Yet as organizations eagerly embrace AI in all its forms, many have neglected parts of their data management needs, says Laura Hemenway, president, founder, and principal of Paradigm Solutions, which supports large enterprise-wide transformations. Even those who are on top of data management often downplay the powerful work their data management tools do. As such, Hemenway thinks data management software deserves more recognition for the important job it does, even as the work involved is often considered a tedious task that doesn’t have the pizzazz of making the most of ChatGPT. Still, sound data management is a linchpin for AI and other analytics work, which underpins a whole host of processes deemed critical in modern business. ... But with no big breakthroughs, interest fizzled and the metaverse found itself on some overhyped tech lists. But don’t be so quick to write it off, warns Taylor, who thinks this category of tech has been unfairly downgraded, which lands it on his list of underhyped technologies.


How to Sell A Technical Debt From a DevOps Perspective?

In the course of my journey, I have formed 3 categories of business motivation for "buying" technical debt: "Fat" indifference - when there's a rich investor, the CEO can afford a development team of weird geeks. It is like, "Well, let them do it! The main thing is to get the product done, and the team spirit is wow, everything is cool, and we'd be the best office in the world." ... Fear - this is one of the most effective, widespread, and efficient models for selling technical debt. What kind of "want" can we talk about here when it's scary? It's about when something happens, like a client leaving because of a failure or a hack - all because of low quality, slowness, or something else. But bluntly selling through fear is also bad: speculating with fear works against trust. You need to sell carefully and as honestly as possible. Trust - this is when a business gives you as much time as you need to work on technical debt. Trust works and is preserved only when the time you take remains a small share of the total and you are as pragmatic as possible in using it. Otherwise, trust is destroyed. Moreover, it does not accumulate; it's a constant process that goes in waves: trust increases and then fades.


Defending Logistics After Cyberattack on DP World Australia

Ransomware is obviously more than a pricey nuisance for companies. “The costs are like millions of dollars for each attack,” Austin says. While businesses often acknowledge that supply chain security and data protection are important priorities, there can be challenges acting on those fronts. “The problem is a lot of them suffer from understaffing,” he says. “They don’t have enough people and logistics, and so they’re struggling with that.” There is a presumption of smooth operations across the supply chain but cyberattacks and other disruptions can deliver wakeup calls. “Prior to the pandemic, a number of companies never realized how important a well-functioning supply chain is, how much they matter,” Austin says. The rise of the pandemic saw cargo getting backed up at various ports around the world, disrupting access and delivery of goods. The cyberattack on DP World Australia was a reminder that intentional targeting by bad actors can also put the supply chain in a chokehold. It is debatable how disconnecting and then later reconnecting to the internet affected the situation DP World Australia faced.


Only 9% of IT budgets are dedicated to security

With rising risk and shrinking resources, the message is clear: businesses need new methods to improve their security. Compounding the urgency is ever-evolving global regulation and the growing time-suck of complying with an increasing number of standards. Organizations are at an impasse in an environment where customers want more insight into a company’s security practices. Two-thirds say that customers, investors and suppliers are increasingly seeking proof of security and compliance. While 41% provide internal audit reports, 37% third-party audits, and 36% complete security questionnaires, 12% admit they don’t or can’t provide evidence when asked. That means companies worldwide are falling at the very first hurdle – costing them potential revenue and growth opportunities in new markets. Businesses spend an average of 7.5 hours per week – more than 9 working weeks a year – on achieving or maintaining security compliance. 54% are concerned that secure data management is becoming more challenging with AI adoption, with 51% saying that using generative AI could erode customer trust.


The Power of Preference in the Wake of Privacy Regulations

Providing customers with autonomy to dictate their own data-sharing preferences isn’t just a legal obligation; it’s also a key way to improve trust, establish transparency and strengthen brand loyalty. Additionally, teams can use this highly personalized data to tailor their marketing efforts, so they’re only serving up content and communications that are the most relevant to individual customers. As such, business leaders shouldn’t feel hindered or restricted by legal requirements like Law 25. Instead, it should challenge businesses to consider this renewed emphasis on consumer autonomy as a positive development. This is especially true for companies that deal with our most sensitive data (i.e. financial and health information). Beyond these updated privacy regulations, financial services and healthcare providers could face serious legal repercussions if customer and patient information is obtained without consent or ends up in the wrong hands. Developing a consumer-centric strategy anchored on up-to-date preferences is therefore an absolute necessity.


How To Attract Premium Clients And Charge Accordingly — Even During Market Instability

In the e-commerce marketing world, we often hear that we need to speak to the client's pain points — to amplify that fear so people are motivated to buy — but we advise against using this common method. If we are constantly speaking to that disillusioned version of our client, it takes us longer to scale a business. It means we have to drag, convince and educate. Instead, elevate the quality of clients you are attracting. Avoid using fear-based marketing to sell. Out of our sample size of 300-plus clients in a variety of sectors, just by shifting the language, the quality of the client improved 100% of the time. The future of marketing is to speak to the empowered version of your client because today's consumer is more sophisticated than ever. When you talk to that client, you're attracting clients who are resourceful and willing to bet on themselves and see their value. You'll elevate the type of clients you attract, and they're willing to invest more. ... Price the services you offer based on the value you bring to the table, specifically on the lifetime value that it will provide to the client. 


Powering a Greener Future: How Data Centers Can Slash Emissions

As data and analytics have inarguably become the fuel of business success, the rise of data centers is outpacing our ability to mitigate the resultant carbon emissions. If data industry leaders don’t seek new methods of carbon reduction and embrace more energy-efficient processing, the costs will quickly become insurmountable. Thankfully, companies are increasingly setting specific carbon emission targets, either because of their own environmental, social, and governance (ESG) goals or due to legal requirements or regulations. In fact, these targets may even be good for business. A recent McKinsey study found that companies with products with ESG-related claims saw 8% more cumulative growth than companies that did not associate their products with ESG. A recent poll of American consumers found that, despite inflation, 66% of consumers are willing to pay more for sustainable products and services. Many organizations already aim to have net-zero emissions by 2050, but most are focusing on alternative and renewable energies, which is good but insufficient because it misses the core of the problem: the overconsumption of energy in the data center due to misused infrastructure.


Three Causes of Cloud Migration Failure in Large Enterprises

Cloud migration is not a simple lift-and-shift operation; it involves myriad complexities that demand careful consideration. Underestimating these complexities is a significant pitfall that can lead to costly failures in large enterprise cloud migrations. Transferring vast amounts of data while ensuring seamless integration with existing systems is a significant challenge. Not all applications can seamlessly transition to the cloud. Some require considerable reconfiguration or redevelopment. Ensuring data security and compliance with regulatory standards is complex, with varying requirements across industries and regions. It can be intricate to optimize performance in the cloud, including network latency and resource allocation, and daunting to track and control cloud costs amid scalability and resource provisioning complexities. ... Employee resistance to change is a critical factor that can make or break a cloud migration initiative in large enterprises. In fact, industry leaders emphasize that employee resistance to change is the primary reason for enterprise cloud migration failures.


A Detection and Response Benchmark Designed for the Cloud

Operating in the cloud securely requires a new mindset. Cloud-native development and release processes pose unique challenges for threat detection and response. DevOps workflows — covering how application code is committed, built, and delivered — bring new teams and roles into the security program as key players. Rather than the exploitation of traditional remote code execution vulnerabilities, cloud attacks focus more heavily on software supply chain compromise and identity abuse, both human and machine. Ephemeral workloads require augmented approaches to incident response and forensics. While identity and access management, vulnerability management, and other preventive controls are necessary in cloud environments, you cannot stay safe without a threat detection and response program to address zero-day exploits, insider threats, and other malicious behavior. It's impossible to prevent everything. The 5/5/5 benchmark challenges organizations to acknowledge the realities of modern attacks and to push their cloud security programs forward.


Are Business Continuity Plans Still Relevant?

The successful organizations focused on building teams that were adept at proactively responding to near and longer-term challenges. The less successful were reactionary, starting by executing procedures in plans that focused on short-term outcomes. Taken one step further, those organizations that really knew what it took to deliver products and services, how they reached their customers and suppliers, and the relationship between processes, resources, and third parties were able to better respond and prevent disruption or other forms of unacceptable impact. ... When you lack a full picture view of your business operations and go-to-market strategy, dependencies and interdependencies are often overlooked. Developing and maintaining a digital model of your organization, its products/services, and business processes offers a valuable resource to query. This digital model gives you an end-to-end perspective on your operations, which is invaluable for assessing vulnerabilities like identifying and treating critical single points of failure or those parts of the business without a recovery strategy, addressing change management, and making better business decisions.
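The value of such a digital model is that it can be queried directly. As a rough illustration (the capabilities and suppliers below are invented), a few lines of Python can flag capabilities served by a single provider, the classic single point of failure:

```python
# Toy "digital model": each business capability maps to the providers
# (suppliers, systems, sites) that can deliver it. All names are invented.
providers = {
    "payments": ["acme-pay"],                 # one supplier only
    "shipping": ["carrier-a", "carrier-b"],   # redundant supply
    "telephony": ["voip-x"],                  # one supplier only
}

def single_points_of_failure(providers):
    """A capability with fewer than two providers has no fallback."""
    return sorted(c for c, ps in providers.items() if len(ps) < 2)

print(single_points_of_failure(providers))  # ['payments', 'telephony']
```

A real model would also capture processes, resources, and third-party dependencies, but the principle is the same: once operations are represented as data, vulnerability questions become queries.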



Quote for the day:

"Positive thinking will let you do everything better than negative thinking will." -- Zig Ziglar

Daily Tech Digest - November 19, 2023

OpenAI jettisons CEO Sam Altman for dishonesty

Board Chairman Greg Brockman will also leave his role as a result of the shakeup. OpenAI initially stated that Brockman would remain in his role as president, but shortly after the announcement of the reshuffling, he announced that he would quit the company entirely. The company’s CTO, Mira Murati, will take over as interim CEO, according to OpenAI, and the board will begin to conduct a formal search for a permanent replacement. Brockman will report to Murati, the company noted. Ritu Jyoti, group vice president for worldwide AI and automation research at IDC, called the shakeup “astonishing,” noting that Altman had been present at many public events, even in the days before his ouster, and that there was no sense of trouble in the offing for the CEO. “I’m sure the board has taken the right decision,” she said. “But it’s quite unexpected.” Brockman and Murati were both promoted to their former roles in May 2023. Brockman has been closely involved in both broad strategy for the company as well as personal coding contributions, and has focused on training “flagship AI systems” since assuming the role of president.


Microsoft and Meta quizzed on AI copyright

The Lords were keen to hear what the two experts thought about open and closed data models and how to balance risk with innovation. Sherman said: “Our company has been very focused on [this]. We’ve been very supportive of open source along with lots of other companies, researchers, academics and nonprofits as a viable and an important component of the AI ecosystem.” Along with the work a wider community can provide in tuning data models and identifying security risks, Sherman pointed out that open models lower the barrier to entry. This means the development of LLMs is not restricted to the largest businesses – small and mid-sized firms are also able to innovate. In spite of Microsoft’s commitment to open source, Larter appeared more cautious. He told the committee that there needs to be a conversation about some of the trade-offs between openness and safety and security, especially around what he described as “highly capable frontier models”. “I think we need to take a risk-based approach there,” he said. 


The deepfake dilemma: Detection and decree

Deepfake technology poses significant challenges in legal proceedings, particularly in criminal cases, with potential repercussions on individuals' personal and professional lives. The absence of mechanisms to authenticate evidence in most legal systems puts the onus on the defendant or opposing party to contest manipulation, potentially privatizing a pervasive problem. To address this, a proposed rule could mandate the authentication of evidence, possibly through entities like the Directorate of Forensic Science Services, before court admission, albeit with associated economic costs. In India, existing laws offer some recourse against deepfake issues, but the lack of a clear legal definition hampers targeted prosecution. The evolving nature of deepfake technology compounds the challenges for automated detection systems, leading to increased difficulty, particularly in the face of contextual complexities. This poses a significant threat to legal proceedings, potentially prolonging trials and heightening the risk of false assumptions.


The data skills gap keeps getting bigger. Here's how one company is filling it

The key message, says Moore, is that Bentley's apprenticeship program is adding data talent to the IT skills pool and it's helping to foster a community of like-minded professionals. "We've got a group of people who've gone through the program. It's not just one school leaver going to a corner of an office and not being sure how to fit in," he says. "I've now got people on first, second, third, and fourth years of degrees. That depth of talent means there's always someone to ask for help and support. There's a great community." Moore says Bentley recruits the students and then places them with a university. "I'm keen first and foremost that they're coming to Bentley and they're adding value. They're connecting with the brand, sustainability, and inclusivity that we offer -- and a lot of people are resonating with that idea." When it comes to recruiting talent for the program, Moore says he's put a lot of effort into going to schools. He typically looks for a solid grade in math, as both the degree and the work at Bentley are "stats-heavy".


What are the Benefits of DevOps?

CI/CD is a cornerstone of DevOps. Continuous integration involves frequently integrating code changes into a shared repository, where automated tests are run to catch integration issues early. Continuous Deployment takes this a step further by automating the deployment of code changes to production environments. This automated pipeline reduces the risk of human error and ensures that the latest code is always available to users. ... DevOps encourages a focus on quality and reliability throughout the development process. Automated testing, part of the CI/CD pipeline, ensures that new code changes do not introduce regressions or bugs. This leads to more stable and reliable software, as issues are caught and resolved before they reach production. ... DevOps heavily relies on automation tools to streamline processes. Infrastructure as Code (IaC) treats infrastructure setup as code, allowing developers to define and manage infrastructure using version-controlled files. This approach eliminates manual setup, reduces inconsistencies and accelerates environment provisioning.
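The gating logic at the heart of a CI/CD pipeline can be sketched in a few lines. This is a toy model, not tied to any particular CI product: checks run against a proposed change, and deployment happens only when every check passes:

```python
# Minimal sketch of a CI/CD gate: a change is deployed only if all
# automated checks pass. The check names and change fields are invented.

def run_pipeline(change, checks):
    """Run each check against the change; deploy only if all pass."""
    results = {check.__name__: check(change) for check in checks}
    return {"results": results, "deployed": all(results.values())}

# Illustrative checks standing in for a unit-test suite and a linter.
def unit_tests(change):
    return change.get("tests_pass", False)

def lint(change):
    return "TODO" not in change.get("diff", "")

good = {"tests_pass": True, "diff": "fix: handle empty input"}
bad = {"tests_pass": True, "diff": "TODO: remove debug print"}

print(run_pipeline(good, [unit_tests, lint])["deployed"])  # True
print(run_pipeline(bad, [unit_tests, lint])["deployed"])   # False
```

Real pipelines add stages (build, integration tests, staged rollout), but the design choice is identical: no human decides whether the latest code ships; the automated gate does.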


A Roadmap to True Observability

Ultimately, true observability empowers teams to deliver more reliable, responsive, and efficient applications that elevate the overall user experience. In order to achieve "true" observability, it's important to understand the Observability Maturity Model. This model outlines the stages through which organizations evolve in their observability practices, acting as a roadmap. Here, we'll describe each maturity stage, highlight their advantages and disadvantages, and offer some practical tips for moving from one stage to the next. ... After understanding the Observability Maturity Model, it's essential to explore the multifaceted approach companies must embrace for a successful observability transition. Despite the need to adopt advanced tools and practices, the path to "true" observability can demand significant cultural and organizational shifts. Companies must develop strategies that align with the observability maturity model, nurture a collaborative culture, and make cross-team communication a priority. The rewards are quite substantial — faster issue resolution and improved user experience, making "true" observability a transformative journey for IT businesses.


Make Your Dev Life Easier by Generating Tests with CodiumAI

Many developers still claim they don’t like writing tests, so the idea of generating them using AI was always going to appeal. Of course, tests are not necessarily a secondary issue — with Test-driven development (TDD) you write the tests first. While it is a good thing every team should try, common practice sees the creation of a minimum viable product (MVP) first, and when there is some evidence this has a future, to then continue the project with full unit testing. I had not seen CodiumAI before, but as it has generated an easy-to-understand elevator pitch — “Generating meaningful tests for busy devs” — right on its front page, I’m immediately ready to give it a go. Clearly, they are trying to jump into the tool space opened up by OpenAI’s GPT models — indeed, it is “powered by GPT-3.5&4 & TestGPT-1”. CodiumAI’s implementation only works with Visual Studio Code and JetBrains for now; I’ll use the former. I find VSC a bit awkward, but at least it should be happy with C#, my preferred tipple. But I know JetBrains tools are extremely popular too.


Meta disbanded its Responsible AI team

According to the report, most RAI members will move to the company’s generative AI product team, while others will work on Meta’s AI infrastructure. The company regularly says it wants to develop AI responsibly and even has a page devoted to the promise, where the company lists its “pillars of responsible AI,” including accountability, transparency, safety, privacy, and more. ... He added that although the company is splitting the team up, those members will “continue to support relevant cross-Meta efforts on responsible AI development and use.” ... RAI was created to identify problems with its AI training approaches, including whether the company’s models are trained with adequately diverse information, with an eye toward preventing things like moderation issues on its platforms. Automated systems on Meta’s social platforms have led to problems like a Facebook translation issue that caused a false arrest, WhatsApp AI sticker generation that results in biased images when given certain prompts, and Instagram’s algorithms helping people find child sexual abuse materials.


Ransomware gang files SEC complaint against company that refused to negotiate

It will be interesting to see how the SEC reacts to the possibility of ransomware gangs taking advantage of its rules and complaint facility to blackmail victims and whether the agency will be more lenient with how it enforces the new disclosure requirements in the beginning. “This puts added pressure on publicly traded MeridianLink after claiming to have breached its network and stolen unencrypted data,” said Ferhat Dikbiyik. “This move has blindsided the industry and raised questions about the effectiveness of the new SEC rules in the fight against cybercrime. It also begs the question: Does ALPHV have affiliates within the US?” ... “While shocking to many, the reports that BlackCat tattled on one of their victims to the SEC isn't surprising in the ever-evolving ransomware economy,” Jim Doggett, CISO of cybersecurity firm Semperis, tells CSO. “Some will argue that BlackCat's move is opportunistic at best, and they are motivated only by greed to force quicker payments by victims. Others will say that this aggressive move could leave the group in the crosshairs of US law enforcement agencies. At the end of the day, the ransomware gangs are criminal organizations, and their only motive is profits.”


The Next Leap in Battery Tech: Lithium-Ion Batteries Are No Longer the Gold Standard

Unfortunately, the natural SEI is brittle and fragile, resulting in poor lifespan and performance. Here, the researchers have looked into a substitute for natural SEI, which could effectively mitigate the side reactions within the battery system. The answer is ASEI: artificial solid electrolyte interphase. ASEI corrects some of the issues plaguing the bare lithium metal anode to make a safer, more reliable, and even more powerful energy source that can be used with more confidence in electric vehicles and other similar applications. ... The future of the ASEI layers is bright but calls for some improvements. Researchers mainly would like to see improvement in the adhesion of the ASEI layers on the surface of the metal, which overall improves the function and longevity of the battery. Additional areas that require some attention are stability in the structure and chemistry within the layers, as well as minimizing the thickness of the layers to improve the energy density of the metal electrodes. Once these issues are worked out, the road ahead for an improved lithium metal battery should be well-paved.



Quote for the day:

"When you expect the best from people, you will often see more in them than they see in themselves." -- Mark Miller

Daily Tech Digest - November 18, 2023

What You Need to Know About Securing 5G Networks and Communication

IoT devices have exploded over the past several years, and this growth shows no signs of slowing down. And all of these devices have one thing in common: Remote connectivity via a public 4G or 5G network, or, increasingly, a private 5G network. This explosion of connected devices creates an expanded attack surface, since the entire network is only as secure as its weakest link. Specifically, even if the network itself is secure, any devices attached to it that are not secure in how they communicate or receive updates create a breach opportunity. As a result, it’s essential that every device has an identity and each identity is managed. This might sound daunting, but it’s not as complex as it seems at first – it goes back to the building blocks of PKI. Much of the security industry has a handle on running PKI for enterprise networks in their organization (think laptops, mobile devices, and so on). Therefore, security teams are also enabled to do PKI for these smart devices — it’s the same approach for a different endpoint.
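The "every device has a managed identity" idea can be sketched as follows. Real deployments would anchor identity in X.509 certificates issued by a PKI; this toy uses per-device HMAC keys as a stand-in, and the device names are invented:

```python
import hashlib
import hmac
import secrets

# Toy device-identity registry: each device is enrolled with its own key,
# and messages are only accepted when they verify against the registry.
# An unknown or revoked device (the "weakest link") is simply rejected.

class DeviceRegistry:
    def __init__(self):
        self._keys = {}

    def enroll(self, device_id):
        key = secrets.token_bytes(32)
        self._keys[device_id] = key
        return key

    def revoke(self, device_id):
        self._keys.pop(device_id, None)

    def verify(self, device_id, message, tag):
        key = self._keys.get(device_id)
        if key is None:
            return False
        expected = hmac.new(key, message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)

registry = DeviceRegistry()
key = registry.enroll("sensor-42")
msg = b"temperature=21.5"
tag = hmac.new(key, msg, hashlib.sha256).digest()
print(registry.verify("sensor-42", msg, tag))   # True
registry.revoke("sensor-42")
print(registry.verify("sensor-42", msg, tag))   # False
```

Certificates improve on this in exactly the way the article suggests: the registry becomes a certificate authority, and devices prove identity without any shared secret leaving the device.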


To AI Hell and Back: Finding Salvation Through Empathy

Iannopollo said the guides assisting in AI Hell could come from IT, marketing, or the executive team. “All of them understand the incredible opportunity of generative AI and the unparalleled transformative power of the new technology. And they know that none of it can be realized without adequate security, privacy, and risk governance.” According to Forrester’s research, 36% of respondents in those groups said privacy and security are the greatest barriers to generative AI adoption, while another 31% said governance and risk were the biggest hurdle. Another 61% cited concerns that GenAI could violate privacy and data protection laws like the EU’s GDPR. “So, concerns exist,” she said. “But remember, Hell is a place of confusion.” As more frameworks come online -- more regulations, there may be less confusion and the guides will help businesses assess their AI adoption. ... Once you are out of AI Hell, like Dante, your story is not complete. Dante had to first stop in purgatory. And after spending time in AI Hell dealing with the questions of risk and threats, businesses will need to figure out a compliance strategy.


Conceptual vs. Logical vs. Physical Data Modeling

“Companies need to do Data Modeling to solve a specific business problem or answer a business question,” summarized Aiken. IT and businesses need to share goals and understanding to get to a data solution. Moreover, there needs to be a common language between systems for data to flow smoothly. However, slapping together any model or a big overarching enterprise architecture will not be helpful. A data model needs to achieve a particular purpose, and getting there requires a systematic process. Aiken’s three-dimensional model evolution framework provides resources for an improved data platform. It considers the existing architecture and the evolution needed to meet business needs and validates that stakeholders and builders are on the same page. A combination of conceptual, logical, and physical data models promises meaningful and useful results, especially where business and IT need to achieve a common objective. Doing the data modeling correctly and understanding requirements frees up 20% time and money for corporations to leverage their data capabilities and get more value from them.
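To make the three levels concrete, here is one business fact, "a customer places an order", expressed at each level. The entity and column names are illustrative, not drawn from Aiken's framework:

```python
from dataclasses import dataclass

# Conceptual model: entities and relationships only, no attributes.
CONCEPTUAL = "Customer --places--> Order"

# Logical model: attributes, keys, and types, still platform-independent.
@dataclass
class Order:
    order_id: int
    customer_id: int
    total: float

# Physical model: one concrete DBMS implementation, with storage details.
PHYSICAL = """
CREATE TABLE orders (
    order_id    BIGINT PRIMARY KEY,
    customer_id BIGINT NOT NULL REFERENCES customers(customer_id),
    total       NUMERIC(12, 2) NOT NULL
);
"""

o = Order(order_id=1, customer_id=7, total=99.50)
print(o.total)  # 99.5
```

The business conversation happens at the conceptual level, the shared IT/business language at the logical level, and only the physical level commits to a platform; keeping the three in sync is what a systematic modeling process buys.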


AI: The indispensable ally in the information age

The implementation of AI in data centers must be viewed through a dual lens: risk mitigation and knowledge preservation. As we face a generational turnover in expertise within the industry, with a significant proportion of seasoned professionals retiring, there's an urgent need to capture and transfer this wealth of knowledge. AI and machine learning algorithms, when correctly trained and utilized, can play a crucial role in bridging this knowledge gap. By learning from clean data and from the benchmarks and decisions of experienced personnel, AI systems can emulate and, eventually, enhance these expert-driven processes. This transfer of knowledge is vital not just for maintaining current operational standards, but also for paving the way for more advanced, efficient, and resilient data center architectures. Moreover, AI's potential in managing and reducing operational risks in data centers is monumental. Advanced predictive analytics can foresee and mitigate potential failures, while continuous monitoring AI systems can identify anomalies that hint at future problems, allowing for preemptive maintenance and risk aversion.


Shadowy Hack-for-Hire Group Behind Sprawling Web of Global Cyberattacks

The cybersecurity firm's exhaustive analysis of data that Reuters journalists collected showed near-conclusive links between Appin and numerous data theft incidents. These included theft of email and other data by Appin from Pakistani and Chinese government officials. SentinelOne also found evidence of Appin carrying out defacement attacks on sites associated with the Sikh religious minority community in India and of at least one request to hack into a Gmail account belonging to a Sikh individual suspected of being a terrorist. "The current state of the organization significantly differs from its status a decade ago," says Tom Hegel, principal threat researcher at SentinelLabs. "The initial entity, 'Appin,' featured in our research, no longer exists but can be regarded as the progenitor from which several present-day hack-for-hire enterprises have emerged," he says. Factors such as rebranding, employee transitions, and the widespread dissemination of skills contribute to Appin being recognized as the pioneering hack-for-hire group in India, he says. 


Security Firm COO Hacked Hospitals to Drum Up Business

According to the plea agreement, Singla on Sept. 27, 2018, knowingly transmitted a command that resulted in an unauthorized modification to the configuration template for the ASCOM phone system at Gwinnett Medical Center's Duluth hospital campus. As a result, all of the Duluth hospital's ASCOM phones that were connected to the phone system during Singla's transmission were rendered inoperable, and more than 200 ASCOM handset devices were taken offline, the court document says. Those phones were used by Duluth hospital staff, including doctors and nurses, for internal communication, including for "code blue" emergencies. The ASCOM phones were used to place calls outside of the hospital, the court document says. On that same day, Singla - without authorization - obtained information including names, birthdates and the sex of more than 300 patients from a Hologic R2 Digitizer connected to a mammogram machine at Gwinnett's Lawrenceville hospital campus, the document says. The digitizer, which was accessible through Gwinnett's virtual private network, was protected by a password. 


How to Structure and Build a Team For Long-Term Success

Leaders have to be careful not to get caught in a situation where somebody could misconstrue their kindness or attention, but being in leadership doesn't have to mean sacrificing gaining friendships. Balance being too friendly with being able to offer necessary corrections. By nature, I tend to be a people pleaser, so I must work on being tougher — especially early in relationships. After my collegiate basketball career ended, I became a high school basketball referee. I found that the whole game went smoother if I was tough in the first quarter of a game. Similarly, it is important for leaders to establish a sense of control when they first hire a new team member; then they can infuse the second, third and fourth quarters with more friendship. Leaders can have situations that test the relationships they're working to build. Let's say someone has two people on their team, and they have to decide which one gets promoted. The one who didn't get promoted might feel like the leader let them down. Leaders must maintain enough professional distance so that an employee knows it was not due to favoritism in this situation.


Data is Everybody’s Business: The Fundamentals of Data Monetization

Companies get better at data monetization by practicing it. “Rather than wait for the right set of capabilities to magically appear,” Owens says, “businesses should start engaging in monetization activities. The learning and the returns come from doing, not from talking about doing. For starters, organizations could choose one process or product to improve or a single business challenge to solve with data.” Creating data assets also means creating organizational governance so that the right people use the data in the right ways. Data assets can be monetized only after data is properly cleaned, permissioned with the right security, and made accessible to authorized users. “If you aren’t purposely managing and monetizing your data, it won’t pay off,” says Wixom. A big problem with data is that everybody is starting from scratch all the time, says Wixom. “There isn’t enough attention to accumulating knowledge and skills for the future benefit of the organization. But if you create data assets and establish enterprise capabilities to manage them properly, data can be reused limitlessly for all kinds of value-creating reasons across an organization.”


Blockchain could save AI by cracking open the black box

Blockchain is finally being unchained from crypto, and many now see its potential as a foundation of support and validation for another emerging technology -- AI. Blockchain -- and other distributed ledger technologies -- could even help solve AI's black box problem "by providing a transparent, immutable ledger to monitor model training and trace decision-making processes," according to the authors of a new report. "This gives organizations the ability to audit the data and algorithms used, enabling greater security and trust in AI systems." ... "As AI operations go mainstream -- and as people raise concerns about the technology -- leaders are recognizing the need for a more responsible AI that prioritizes data security and transparency," the survey's authors point out. "Ensuring trustworthiness and reliability of their AI tools is a top priority for businesses, and blockchain is the turnkey solution for addressing the risks that come with AI implementation." Executives have developed a greater level of understanding of blockchain. Seventy-seven percent say they fully understand blockchain and can explain the value of it to their teams -- up five percentage points over last year's survey. 
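The "transparent, immutable ledger" idea can be sketched with a simple hash chain: each record of a training or decision event embeds the hash of the previous record, so any retroactive edit is detectable. A real deployment would replicate this across many nodes; the event fields below are invented:

```python
import hashlib
import json

# Append-only hash chain: each record stores the previous record's hash,
# so editing any earlier entry breaks verification of the whole chain.

def record_hash(record):
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(ledger, event):
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    record = {"event": event, "prev": prev}
    record["hash"] = record_hash({"event": event, "prev": prev})
    ledger.append(record)

def verify(ledger):
    prev = "0" * 64
    for r in ledger:
        if r["prev"] != prev or r["hash"] != record_hash({"event": r["event"], "prev": r["prev"]}):
            return False
        prev = r["hash"]
    return True

ledger = []
append(ledger, {"step": "ingest", "dataset": "train-v1"})
append(ledger, {"step": "train", "epochs": 3})
print(verify(ledger))                           # True
ledger[0]["event"]["dataset"] = "tampered"      # retroactive edit
print(verify(ledger))                           # False
```

This is the auditability property the report describes: anyone holding the chain can check that the recorded history of data and training decisions has not been quietly rewritten.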


FinOps Debuts Cloud Transparency Standards

Beyond the largest players in the multi-billion-dollar cloud market, the project is also backed by several large enterprise users, such as Goldman Sachs and Walmart. “We are establishing FOCUS as the cornerstone lexicon of FinOps by providing an open source, vendor-agnostic specification featuring a unified schema and language,” says Mike Fuller, CTO at the FinOps Foundation. “With this release, we are paving the way for FOCUS to foster collaboration among major cloud providers, FinOps vendors, leading SaaS providers and forward-thinking FinOps enterprises to establish a unified, serviceable framework for cloud billing data, increasing trust in the data and making it easier to understand the value of cloud spend,” Fuller said in a statement. As readers will know, cloud operators provide customers with billing data detailing the costs of the services they use, including granular details on individual product costs and any discounts. Businesses use this billing data from the service providers to track their spend, forecast future costs and build their SaaS budgets.
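What a unified schema buys you can be sketched in a few lines: billing rows from two providers with different column names are mapped into one shape so spend can be aggregated. The column names below are invented for illustration; the actual FOCUS specification defines its own fields:

```python
# Map provider-specific billing columns onto one unified schema so rows
# from different clouds can be compared and summed. All names are invented.

def normalize(row, column_map):
    """Rename a provider's raw columns to the unified column names."""
    return {unified: row[raw] for raw, unified in column_map.items()}

provider_a = {"UnblendedCost": 12.5, "ProductName": "object-storage"}
provider_b = {"costInBillingCurrency": 9.0, "meterCategory": "object-storage"}

a_map = {"UnblendedCost": "billed_cost", "ProductName": "service"}
b_map = {"costInBillingCurrency": "billed_cost", "meterCategory": "service"}

rows = [normalize(provider_a, a_map), normalize(provider_b, b_map)]
total = sum(r["billed_cost"] for r in rows)
print(total)  # 21.5
```

Without a shared schema, every FinOps team maintains its own version of these mappings; the point of a standard is to write them once, at the provider.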



Quote for the day:

"Pursue one great decisive aim with force and determination." -- Carl Von Clause Witz

Daily Tech Digest - November 17, 2023

Here is how far we are to achieving AGI, according to DeepMind

DeepMind presents a matrix that measures “performance” and “generality” across five levels, ranging from no AI to superhuman AGI, a general AI system that outperforms all humans on all tasks. Performance refers to how an AI system’s capabilities compare to humans, while generality denotes the breadth of the AI system’s capabilities or the range of tasks for which it reaches the specified performance level in the matrix. ... DeepMind suggests that an AGI benchmark would encompass a broad suite of cognitive and metacognitive tasks, measuring diverse properties, including linguistic intelligence, mathematical and logical reasoning, spatial reasoning, interpersonal and intrapersonal social intelligence, the ability to learn new skills, and creativity. However, they also acknowledge that it is impossible to enumerate all tasks achievable by a sufficiently general intelligence. “As such, an AGI benchmark should be a living benchmark. Such a benchmark should therefore include a framework for generating and agreeing upon new tasks,” they write.
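The matrix can be read as a simple lookup: a system is classified by the highest performance level it reaches and by whether it reaches that level narrowly or generally. The sketch below follows the paper's ladder of level names; the example classifications are illustrative:

```python
# Performance ladder from DeepMind's levels-of-AGI framing, crossed with
# a narrow/general axis. The example calls are illustrative placeholders.

LEVELS = ["No AI", "Emerging", "Competent", "Expert", "Virtuoso", "Superhuman"]

def classify(performance_level, general):
    """Name a cell of the matrix from a level index and a generality flag."""
    scope = "General" if general else "Narrow"
    return f"{scope} {LEVELS[performance_level]}"

print(classify(5, general=False))  # Narrow Superhuman
print(classify(1, general=True))   # General Emerging
```

A system can thus sit in different cells for different task suites, which is exactly why the authors argue the benchmark itself has to be broad and living.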


Utilizing A Business Information Security Officer

Unfortunately, a majority of CISOs are spending their limited time firefighting issues rather than contributing to business strategy or forging relationships. This is where a business information security officer (BISO) can come in. According to Forrester, the BISO operates on behalf of the CISO, serving as an advisor and bridge to functional leaders. In other words, it’s a security role that puts business first. ... Security culture can be defined as the values, attitudes, customs, beliefs, and social behaviors that influence the security posture of an organization. It’s the stuff that drives secure behavior in employees (even when no one’s watching); it’s the security instinct that kicks in when someone sees something unusual or suspicious. Traditionally, most CISOs are not in close contact or communication with employees, and therefore, it is difficult for them to influence and promote a positive security culture. With the BISO role, it's different; since the BISO enjoys closer ties with various business groups and has a better understanding of employee requirements and sentiments, they are better positioned to influence culture change.


Russian Hackers Linked to 'Largest Ever Cyber Attack' on Danish Critical Infrastructure

At the 11 companies that were successfully infiltrated, the threat actors executed malicious code to conduct reconnaissance of the firewall configurations and determine the next course of action. "This kind of coordination requires planning and resources," SektorCERT said in a detailed timeline of events. "The advantage of attacking simultaneously is that the information about one attack cannot spread to the other targets before it is too late." "This puts the power of information sharing out of play because no one can be warned in advance about the ongoing attack since everyone is attacked at the same time. It is unusual – and extremely effective." A second wave of attacks targeting more organizations was subsequently recorded from May 22 to 25 by an attack group with previously unseen cyber weapons, raising the possibility that two different threat actors were involved in the campaign. That said, it's currently unclear if the groups collaborated with each other, worked for the same employer, or were acting independently.


Demystifying Event Storming: A Comprehensive Guide to Understanding Complex Systems

Event Storming is a powerful and collaborative workshop-based technique used to gain a deep understanding of complex systems, processes, and domains. It was introduced by Alberto Brandolini, an expert in domain-driven design and Agile software development. Event Storming stands out as a versatile and practical approach that brings together stakeholders, domain experts, software developers, and business analysts to unravel the intricacies of a system. At its core, Event Storming is a visual modeling method that uses sticky notes and a large workspace, such as a wall or whiteboard, to represent various events and interactions within a system. These events can range from user actions, system processes, and domain events to business rules and policies. The workshop participants collaboratively contribute to this visual representation, creating a shared understanding of the system’s behavior and business processes. Event Storming starts with a domain description, then Big Picture. Big Picture Event Storming is the first step in understanding complex systems. 
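A Big Picture board can even be captured as data once the workshop is done. In the sketch below, each sticky note is a domain event named in the past tense, ordered left to right along the timeline; the events themselves are an invented example, not part of Brandolini's method:

```python
from dataclasses import dataclass

# A Big Picture Event Storming board as data: orange stickies are domain
# events (always past tense), ordered left to right on the timeline.

@dataclass
class DomainEvent:
    name: str        # past tense: something that already happened
    raised_by: str   # user action, system process, business policy, ...

board = [
    DomainEvent("OrderPlaced", "user action"),
    DomainEvent("PaymentReceived", "system process"),
    DomainEvent("OrderShipped", "business policy"),
]

# Walking the board left to right recovers the narrative of the domain.
print(" -> ".join(e.name for e in board))  # OrderPlaced -> PaymentReceived -> OrderShipped
```

The workshop's value is the conversation around the stickies, but capturing the result this way keeps the shared model queryable after everyone leaves the room.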


Integration of Cybersecurity into Physical Security Realm

In this era of evolving cyber threats, integrating cybersecurity into physical security is not a choice but an organizational necessity. In most cases, physical security and cybersecurity work as two sides of the same backbone in the spectrum of security. Physical security protects tangible assets, such as buildings, equipment, and people, from physical threats. On the other hand, cybersecurity focuses on safeguarding digital assets, information, and systems from virtual threats, including hacking, malware, and data breaches. In this context, just as a backbone provides structural support to the body, the integration of cybersecurity into the realm of physical security offers a robust defense for an organization. This fusion of digital and physical security measures creates a comprehensive defense strategy which safeguards firm-specific assets, personnel, and data from various threats. This integrated approach ensures not only the protection of the physical infrastructure but also the prevention of unauthorized access, tampering, and disruptions caused by cyberattacks. It is paramount to understand how cybersecurity enhances physical security systems’ resilience.


Bridging the expectation-reality gap in machine learning

The engineers who wrangle company data to build ML models know it’s far more complex than that. Data may be unstructured or poor quality, and there are compliance, regulatory, and security parameters to meet. There is no quick fix for closing this expectation-reality gap, but the first step is to foster honest dialogue between teams. Then, business leaders can begin to democratize ML across the organization. Democratization means both technical and non-technical teams have access to powerful ML tools and are supported with continuous learning and training. Non-technical teams get user-friendly data visualization tools to improve their business decision-making, while data scientists get access to the robust development platforms and cloud infrastructure they need to efficiently build ML applications. At Capital One, we’ve used these democratization strategies to scale ML across our entire company of more than 50,000 associates. When everyone has a stake in using ML to help the company succeed, the disconnect between business and technical teams fades. So what can companies do to begin democratizing ML? 


Generative AI Solution Architecture for Complex Enterprises

Generative AI tends to be non-deterministic (running it multiple times even with the same input may result in different behaviour each time it is run). Therefore, how we design, manage and test it needs different thinking from more traditional deterministic technologies. As with machine learning in general, maths and algorithms that are inaccessible to the average person (without knowledge of statistics and data science) create issues in understanding and transparency. Add to this the complexity of enterprise architecture (business, data, applications and technology) in modern organisations, and explainability becomes even more difficult. This non-deterministic behaviour also creates consistency, reliability and repeatability challenges. ... Scaling systems powered by machine learning is challenging. Creating an algorithm in a lab environment that comes up with an answer in several hours is OK for a one-off exercise, but simply won’t cut it for real-time customer interaction at scale. It won’t simply be the performance of the ML models; how you integrate these models with enterprise data stores and systems of record will also impact performance. 
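The non-determinism comes from sampling: generative models draw the next token from a probability distribution rather than always taking the most likely one. A toy sketch (the tokens and weights below are made up, not from any real model) shows both the problem and the usual mitigation of pinning the randomness:

```python
import random

# Toy next-token distribution (hypothetical numbers, not a real model)
tokens = ["approve", "decline", "escalate"]
weights = [0.6, 0.3, 0.1]

def generate(seed=None):
    """Sample one 'token'. Unseeded calls may differ from run to run,
    which is exactly the repeatability problem the article describes."""
    return random.Random(seed).choices(tokens, weights=weights, k=1)[0]

# Pinning the seed (analogous to greedy / temperature-0 decoding)
# restores the repeatability that testing and auditing depend on.
assert generate(seed=42) == generate(seed=42)
```

This is why test strategies for generative systems tend to either fix the sampling parameters or assert on properties of the output rather than exact strings.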


Why cyber war readiness is critical for democracies

We can’t talk about the war in Ukraine and not mention cyber attacks aimed at disrupting operational technology (OT) used by companies that are part of the country’s critical infrastructure (CI). In his talk, Ferguson briefly passed through the known attacks that hit CI entities with OT-specific malware, starting with Stuxnet in 2010 and ending with CosmicEnergy in 2023. Some of the attacks are believed to be the work of the US and Israel (Stuxnet), cybercriminals or are still unattributed (the destructive 2014 attack against a steel plant in Germany). But the rest, he noted, are all believed to have been mounted by Russian state-backed attackers. And, he says, they are getting better at it. Mirroring the development of attacks against IT systems, they have recently begun exploiting legitimate tools found in OT environments, so they don’t need to develop customized malware. Many attackers are scanning for OT-specific protocols and probing OT devices, Ferguson noted. While their actual exploitation hinges on the skills of the attackers, some modes of attack are available to those who are less skilled, but eager. 
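The protocol scanning Ferguson mentions is often as simple as probing well-known OT ports. A minimal defensive sketch, to be run only against hosts you own, checks whether the Modbus/TCP port is exposed (the function name and timeout are my own choices, not from the talk):

```python
import socket

def modbus_reachable(host: str, timeout: float = 1.0) -> bool:
    """Return True if TCP port 502 (Modbus) accepts a connection.

    This is the defensive mirror of the probing attackers perform:
    auditing your own perimeter for exposed OT protocol endpoints.
    """
    try:
        with socket.create_connection((host, 502), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unreachable
        return False
```

An exposed port 502 on an internet-facing address is precisely the kind of finding that lets even less skilled attackers probe OT devices.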


Unpatched Critical Vulnerabilities Open AI Models to Takeover

The risk is not theoretical: Large companies have already embarked on aggressive campaigns to find useful AI models and apply them to their markets and operations. Banks already use machine learning and AI for mortgage processing and anti-money laundering, for example. While finding vulnerabilities in these AI systems can lead to compromise of the infrastructure, stealing the intellectual property is a big goal as well, says Daryan Dehghanpisheh, president and co-founder of Protect AI. "Industrial espionage is a big component, and in the battle for AI and ML, models are a very valuable intellectual property asset," he says. "Think about how much money is spent on training a model on the daily basis, and when you're talking about a billion parameters, and more, so a lot of investment, just pure capital that is easily compromised or stolen." Battling novel exploits against the infrastructure underpinning natural-language interactions that people have with AI systems like ChatGPT will be even more impactful, says Dane Sherrets, senior solutions architect at HackerOne.
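The article doesn't detail the specific flaws, but one classic reason model artifacts are an attack surface is that many serialization formats build on Python's pickle, and unpickling executes attacker-chosen code. A deliberately harmless sketch (the class name is illustrative; a real payload would invoke something like `os.system`):

```python
import os
import pickle

# An ML "model file" is often just a pickle. __reduce__ tells pickle
# which callable to invoke at load time, so loading an untrusted model
# can run arbitrary code. os.getcwd is a harmless stand-in payload.
class PoisonedModel:
    def __reduce__(self):
        return (os.getcwd, ())  # attacker could put (os.system, ("...",))

payload = pickle.dumps(PoisonedModel())
loaded = pickle.loads(payload)  # no PoisonedModel is reconstructed...
print(loaded)                   # ...the chosen callable ran instead
```

This is why model registries and scanners increasingly treat downloaded model files like executable code rather than inert data.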


How to prepare for anything

In Latin, Audi means listen (translated from German horch, which is also the name of Audi’s founder, August Horch). And that is exactly what Mohr decided to do: listen to the questions he and his team were asking themselves. Mohr chose a human approach to understanding how change comes about. It revolves around understanding and embracing the five levels or shades of uncertainty on the journey toward transformation: wonder, skepticism, curiosity, doubt, and creativity. ... “We needed to understand what really matters to our people in order to initiate meaningful change,” Stine Thomssen, who is part of Mohr’s leadership team, recalls. Understanding what really matters to your employees requires insight into their ways of thinking and talking about their everyday practice. This calls for a systematic approach of tapping into the questions people are asking one another in your organization. In the case of the paint shop, it involved setting up several workshops and using a digital platform to engage employees in peer-to-peer conversations about the future. 



Quote for the day:

''The manager asks how and when, the leader asks what and why.'' -- Warren Bennis