Daily Tech Digest - May 12, 2023

The Industrywide Consequences of Making Security Products Inaccessible

Restricting access to security products creates situations where people from underrepresented groups are not able to easily catch up with their more fortunate peers who are already employed by enterprises with access to the latest tooling. In other words, companies publicly championing their efforts to increase diversity and get more people from underrepresented groups in the industry are actually making it harder for the same people to get into cybersecurity. It's not uncommon to see motivated and driven people from underrepresented backgrounds spend their free time studying and trying to level up their skills so they can move up the career ladder. While scholarships and grants are certainly helpful, what can be even more impactful is giving them access to tools they need to learn to develop new skills, build résumés, and get hired or promoted. ... It seems like most security vendors today create thought leadership content about how bad the talent shortage is for the industry, yet few are making it easy for people to become job ready by learning how to use their tools.


Open-Source Leadership to the European Commission: CRA Rules Pose Tech and Economic Risks to EU

As currently written, the CRA would impose a number of new requirements on hardware manufacturers, software developers, distributors, and importers who place digital products or services on the EU market. The list of proposed requirements includes an "appropriate" level of cybersecurity, a prohibition on selling products with any known vulnerability, security by default configuration, protection from unauthorized access, limitation of attack surfaces, and minimization of incident impact. The list of proposed rules also includes a requirement for self-certification by suppliers of software to attest conformity with the requirements of the CRA, including security, privacy, and the absence of Critical Vulnerability Events (CVEs). The problem with these rules, explained Mike Milinkovich, executive director of the Eclipse Foundation, in a blog post, is that they break the "fundamental social contract" that underpins open-source, which is, simply stated, that the producers of that software provide it freely, but accept no liability for its use and provide no warranties.


White House addresses AI’s risks and rewards

While Schiappa agreed that AI can exploit vulnerabilities with malicious code, he argued that the quality of the output generated by LLMs is still hit and miss. “There is a lot of hype around ChatGPT but the code it generates is frankly not great,” he said. Generative AI models can, however, accelerate processes significantly, Schiappa said, adding that the “invisible” parts of such tools — those aspects of the model not involved in the natural language interface with a user — are actually more risky from an adversarial perspective and more powerful from a defense perspective. Meta’s report said industry defensive efforts are forcing threat actors to find new ways to evade detection, including spreading across as many platforms as they can to protect against enforcement by any one service. “For example, we’ve seen malware families leveraging services like ours and LinkedIn, browsers like Chrome, Edge, Brave and Firefox, link shorteners, file-hosting services like Dropbox and Mega, and more. When they get caught, they mix in more services including smaller ones that help them disguise the ultimate destination of links,” the report said.


Start Your Architecture Modernization with Domain-Driven Discovery

Architecture modernization projects are complex, expensive, and full of risks. Starting with a Domain-Driven Discovery (DDD) focuses your team and improves your chances of success. ... There was a time when we started new Agile projects with a two-week Sprint 0 then launched right into coding the solution. Unfortunately, teams often found out later they wasted time and money on "building the wrong thing righter." The influences of Design Thinking and Dual-Track Agile and frameworks like Mobius have opened our collective eyes to the importance of a brief discovery for product work. ... We suggest using event storming workshops to clarify the business processes related to the in-scope systems. Start by choosing a primary process or experience to focus on, such as a new customer registration. Next, collaboratively identify every event in this end-to-end process. It’s important to focus on how it works today, not how it should work in the future. Then identify a subset of the events that are essential to the process and label these Pivotal Events.


Career Reinvention: Considering a Switch to Cybersecurity

A significant skills gap in the cybersecurity industry has created a unique opportunity for individuals from various backgrounds to enter the field. Employers are seeking new people who weren’t necessarily trained to be cyber defenders but who have fresh perspectives and the potential to learn. This situation creates a tremendous opportunity for career reinvention. In response to this talent gap, the industry has committed to providing new hires the resources and support they need to reach their fullest potential and succeed in a new career space. ... Of course, candidates should work to understand all they can about the types of cyber roles they would be most qualified for and interested in taking on – the good and the bad. This will ensure there are no surprises down the line that catch them off guard and leave them regretting their career switch. It is also essential that any decisions being made are based on desire and genuine interest. If one enjoys the work they do and has the opportunity to work with good people, the rest will follow. Making a career change can be stressful, so taking it one step at a time is the best way to approach a drastic reinvention.


Data Waste Is Putting Retail Loyalty at Risk — Here’s Why

According to Wenthe, data wastage — the inefficient or ineffective use of data — has become a common blight among brands in all industries, and especially those in the retail, automotive, CPG, and entertainment spaces. “Data wastage comes in a variety of forms from data sources such as customer service, sales, or operations departments,” Wenthe says. “[It] is usually the result of a collection of unnecessary data, withholding relevant data from the right team, or failure to analyze or action on the data that has been collected.” While it’s not always easy to identify data wastage, Wenthe says time spent managing customer data collections is often the main culprit. According to Gartner, data inefficiencies can end up costing organizations an average of $12.9 million per year — a huge chunk of change for just about any company to lose. “This issue is important for any brand where their data remains disjointed and unable to interact with one another,” Wenthe says. Given the influx of data coming through new channels and departments, including customer service, sales, and operations, it’s no surprise to hear that retail brands right now are struggling.


Israeli threat group uses fake company acquisitions in CEO fraud schemes

The targeted organizations had headquarters in 15 countries, but since they are multinational corporations, employees of these companies from offices in 61 different countries were targeted. The reason why the group is focused on large enterprises is in the lure they chose to justify the very large transfers they're after: company acquisitions. It's not unusual for such multinational companies to acquire smaller companies in various local markets. ... "​​First, members of the executive team are likely to send and receive legitimate communications with the CEO on a regular basis, which means an email from the head of the organization may not seem abnormal," the researchers said. "Second, based on the stated importance of the supposed acquisition project, it’s reasonable for a senior leader at the company to be entrusted to help. And finally, because of their seniority within the organization, there is presumably less red tape that would need to be cut through in order for them to authorize a large financial transaction."


Poison Control: Report Says Tech Workplace Toxicity Rising

Joel Davies, senior people scientist at Culture Amp, says senior leadership hold the keys to creating a better work culture. “There is a common belief that ‘people leave managers, not companies,’ but we have found perceptions of senior leadership tend to be more important for employee engagement and commitment than perceptions of one’s direct manager. Senior leaders are role models, whether they like it or not. The way they behave at work creates powerful social norms that can impact how the rest of the organization behaves.” In a tough economic environment, Tsingos says transparency goes a long way to building a positive workplace perception. “We’re living in an era of uncertainty in the financial markets,” he says. “This pressure creates toxicity. How do you deal with that? You deal with that with transparency. You deal with openness, and you deal with it by investing in your people. You might have a big company laying off thousands of people -- but there are some people who may come back and who are thankful for the transparency. Because that employer was investing in them and treating them nicely.”


The art of leading in the AI age

In the digital era, the leader as a subject matter expert is typically a senior programmer who takes on the role and responsibilities of someone who helps everyone else understand the opportunities and risks of developing something that makes life easier in the short term, but more complex and difficult in the longer term. ... we look for leaders who mediate between different reasons to use (or not to use) technology, because the best facilitator is the one who is most likely to make room for different needs and thus help her fellow human beings design their own lives. This means that leaders primarily act as organizational midwives who use their own experience and expertise to help others trust themselves—and one another—to do a job none of them could do alone. In the digital era, the leader as an organizational midwife is typically a chief experience officer or a people leader who takes on the role and responsibilities of someone who nurtures a culture in which decisions on how something should and should not be used are made deliberately and intentionally by everyone.


The Building Blocks of Success: Is Data Mesh Right for My Organization?

In many ways, data mesh is a lot like Legos. It’s possible to make over 915 million different combinations from just six different Lego bricks. A data mesh can similarly be built in any way that works best for your organization: choose each component carefully and build the solution that most fits your needs. ... The traditional operating model of centralized data engineering requires fewer skilled technical resources as the business teams all share those resources. Decentralization can lead to each business team hiring and supporting their own technical teams, which requires more resources. On the one hand, this is one reason agility and speed-to-delivery is improved: there are more people delivering, perhaps with fewer competing demands on their time. ... The strongest candidate for a data mesh includes a compelling business case, strong buy-in and sufficient resources, and an organizational culture that supports it. If you have an approach that’s working for you — say, your organization is not domain-oriented and has centralized IT with fungible resources that are implemented alongside various projects — then data mesh likely isn’t the right investment at this time.



Quote for the day:

"Uncertainty is a permanent part of the leadership landscape. It never goes away." -- Andy Stanley

Daily Tech Digest - May 11, 2023

Will Rogue AI Become an Unstoppable Security Threat?

The rogue AI concept generally refers to AI systems that have been trained to generate or identify opportunities to exploit code or system vulnerabilities and then take some form of destructive action without human intervention, Saylors says. That action could be the creation of code known to be vulnerable and publishing it to a common code repository with the expectation it would be exploited at a later date. It could also be the active exploitation of vulnerabilities by the AI technology itself. The latter action is an extreme example, Saylors says, and generally only a concern for governments or high-profile enterprises, such as defense contractors and financial institutions. “Such organizations already tend to be under constant attack from well-funded APT groups,” he notes. Unfortunately, as sophisticated AI technologies such as ChatGPT become widely available, they will be trained to exploit code or system vulnerabilities. “I’m not saying ChatGPT, specifically, will do this, but I’m suggesting that bad actors will clone this type of technology and train it for nefarious use,” Saylors says.


Generative AI Will Transform Software Development. Are You Ready?

The coming convergence of generative AI and software development will have broad implications and pose new challenges for your IT organization. As an IT leader, you will have to strike the balance between your human coders—be they professionals or cit-devs—and their digital coworkers to ensure optimal productivity. You must provide your staff guidance and guardrails that are typical of organizations adopting new and experimental AI. Use good judgment. Don’t enter proprietary or otherwise corporate information and assets into these tools. Make sure the output aligns with the input, which will require understanding of what you hope to achieve. This step, aimed at pro programmers with knowledge of garbage in/garbage out practices, will help catch some of the pitfalls associated with new technologies. When in doubt give IT a shout. Or however you choose to lay down the law on responsible AI use. Regardless of your stance, the rise of generative AI underscores how software is poised for its biggest evolution since the digital Wild West known as Web 2.0.


AI outcry intensifies as EU readies regulation

AI offers both the potential to grow the business and a significant risk by eroding a company’s unique selling point (USP). While business leaders assess its impact, there is an outcry from industry experts and researchers, which is set to influence the direction future AI regulations take. In an interview with the New York Times discussing his decision to leave Google, prominent AI scientist Geoffrey Hinton warned of the unintended consequences of the technology, saying: “It is hard to prevent bad actors from doing bad things.” Hinton is among a number of high-profile experts voicing their concerns over the development of AI. An open letter, published by the Future of Life Institute, has over 27,000 signatories calling for a pause in the development of AI, among them Tesla and SpaceX founder Elon Musk – who, incidentally, is a co-founder of OpenAI, the organisation behind ChatGPT. Musk has been openly critical of advancements such as generative AI, but he is reportedly working on his own version. According to the Financial Times, Musk is bringing together a team of engineers and researchers to develop his own generative AI system and has “secured thousands of high powered GPU processors from Nvidia”.


Refined methodologies of ransomware attacks

“Rates of encryption have returned to very high levels after a temporary dip during the pandemic, which is certainly concerning. Ransomware crews have been refining their methodologies of attack and accelerating their attacks to reduce the time for defenders to disrupt their schemes,” said Chester Wisniewski, field CTO, Sophos. ... “With two thirds of organizations reporting that they have been victimized by ransomware criminals for the second year in a row, we’ve likely reached a plateau. The key to lowering this number is to work to aggressively lower both time to detect and time to respond. Human-led threat hunting is very effective at stopping these criminals in their tracks, but alerts must be investigated, and criminals evicted from systems in hours and days, not weeks and months. Experienced analysts can recognize the patterns of an active intrusion in minutes and spring into action. This is likely the difference between the third who stay safe and the two thirds who do not. Organizations must be on alert 24×7 to mount an effective defense these days,” said Wisniewski.


Automation: 3 ways it boosts productivity and reduces burnout

When we automate, we can carve out more time for the big stuff—and the more time we spend on the big stuff, the more engaged we become. Engaged employees aren’t just happier; they also create better customer experiences. Companies, in turn, can charge more for their services. The bottom line: Higher engagement is a win for everyone—companies, customers, and employees alike. To identify your most meaningful work, ask yourself what you enjoy doing the most and what delivers the most impact. For me, that’s writing and high-level strategizing. For a journalist, it might be drafting compelling narratives. For a designer, it might be brainstorming creative and beautiful ways to solve a customer’s problem. ... The benefits of automation are multifold: It increases engagement and productivity; it overcomes human limitations like the need to rest, since with automation you set it and forget it; it minimizes errors; and it establishes processes that can be consistently refined. This list is not exhaustive. But here’s the rub: Automation can’t be established in a vacuum.


NoOps vs. ZeroOps: What Are the Differences?

ZeroOps works from the philosophy that a company’s IT team is uniquely positioned to create innovation that services the organization — if it has time to think, rather than constantly chasing tickets or dealing with upkeep, that is. With more time free, IT teams might create new infrastructure that provides enhanced performance for specific corporate applications or might suggest ways in which current applications can be improved. The opportunities are limitless — if only operations teams had the time to do what they need to be doing! And with ZeroOps, they finally can. A ZeroOps provider works with the IT team to create an environment that is ideally suited to the organization, but in which the ZeroOps provider uses a combination of intelligent automation and remote support to relieve the IT team of the general burden of ensuring the system runs properly. Removing these burdens from a team’s shoulders allows them to place focus back on where it should have been in the first place. In other words, innovation and creation are actually possible again, instead of being bogged down by the backlog of things to do to keep everything running.


Quantifying the Value of Data to Business Leaders

The ROI of data is frequently obscured when critical data points fail to form a bigger picture, said Soares. For example, a modest profit from a particular business asset might not be tracked against a long-enough timescale to warrant its initial price tag. ... How is it possible to change business culture to recognize the true value of data? Soares suggested that there is an ultimately simple way to begin benchmarking across companies to assign data value without resorting to “voodoo economics.” “The value of a company’s data divided by the value of the company is what we call a data monetization index,” noted Soares. “And we have another metric called intangible asset index.” Data-related intangibles include customer data, employee data, reference data, reports, critical data elements, and more. How does one identify a critical data element? Soares estimates that roughly 10% of corporate data would fall under this category, though this number is contextual: What may be critical for one application may not be critical for another. 
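As a rough illustration, the two metrics Soares describes are simple ratios and can be computed directly. The valuation figures below are hypothetical; the hard part in practice is appraising the data and intangible assets in the first place.

```python
# Sketch of the two benchmarking metrics described above.
# All dollar figures are hypothetical inputs for illustration only.

def data_monetization_index(data_value: float, company_value: float) -> float:
    """Value of a company's data divided by the value of the company."""
    return data_value / company_value

def intangible_asset_index(intangible_value: float, company_value: float) -> float:
    """Value of data-related intangibles (customer data, reference data,
    reports, critical data elements, etc.) divided by company value."""
    return intangible_value / company_value

# Hypothetical example: $200M of appraised data in a $2B company
print(data_monetization_index(200e6, 2e9))   # → 0.1
print(intangible_asset_index(500e6, 2e9))    # → 0.25
```

Comparing these ratios across companies in the same sector is what lets the benchmarking avoid the “voodoo economics” Soares warns about.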


Does Your Organization Need a CISO or an External Advisor?

The question on every leader’s mind now is, what is the best way to prepare? Should businesses hire a Chief Information Security Officer (CISO), or incorporate an advisor to the organization's board? Based on our work, we have several recommendations to navigate the best option for your organization: Each business context requires a different cybersecurity strategy. Factoring in the types of threats faced and their level of criticality is also key in the decision-making process. The relevant threat exposures may include manufacturing facilities, high-value IP (next-generation tech, in particular if related to communications or weapons), infrastructure (e.g., energy generation or distribution), ransomware targets, and exploitation opportunities. Being open to exploring hybrid models can be a way to avoid missteps. What level of sophistication does your organization need in a CISO or advisor? Companies with low threat levels (are there any left?) or limited resources may want to rely on external vendors and advisors early in their cybersecurity journey, rather than hiring a CISO immediately.


4 strategies for embracing ‘Everywhere Work’ in 2023

“When it comes to how and where employees work – leaders who do not embrace and enable flexibility where they can – also risk not reaping the benefits of a more engaged, more productive workforce,” said Jeff Abbott, CEO at Ivanti. “Attracting and retaining the very best talent will always be an executive priority, but the organisations that embrace an Everywhere Work mindset – and supporting tech stack – will have a sustainable competitive advantage. There has been a seismic shift in how and where employees expect to get work done and it's imperative for leaders to break down culture and tech barriers to enable it.” As employees strive to strike a balance between work and personal life, they are pushing for new ways of working that help them reduce long commutes and minimise the negative impact on their health and well-being. Unfortunately, many employers are still hesitant to fully embrace virtual work arrangements, treating them as temporary solutions that may be reversed in the future. This reluctance to embrace remote work has led to widespread burnout and disengagement among knowledge workers, particularly younger employees.


Introducing the Data Trust Index: A New Tool to Drive Data Democratization

Data quality frameworks have traditionally focused solely on technical data quality dimensions; the Data Trust Index places a heavy emphasis on the social trust component of confirmability to account for the emotional and cultural factors that shape how people perceive and interact with data in their organizations. The adoption and implementation of data quality frameworks have typically been regarded as the necessary step for any organization wishing to promote data democratization. Good quality data will increase use of the data, or so the logic goes. Our conviction is that a data quality framework is only the necessary first step, that true data democratization requires a holistic approach that appeals to both the logical and emotional sides of people. The Data Trust Index brings data trust out of the realm of sterile dashboards and into something tangible that instills confidence in data and helps create a culture of trust around data. We developed the critical components of the Trust Framework (Credibility, Consistency, Confirmability) over many conversations about what was working and what wasn’t for our clients seeking benefits out of investments in data.



Quote for the day:

"To be successful, you have to have your heart in your business, and your business in your heart." -- Thomas Watson, Sr.

Daily Tech Digest - May 10, 2023

The one true way to prove IT’s value to your CEO

For most IT departments, this is a very difficult question to answer because the systems that we develop are not used by IT but are used by other departments to increase their sales, reduce their expenses, or be more competitive in the marketplace. As such, an IT leader’s usual response to this question is a general statement about how IT has implemented projects across the corporation that have achieved corporate strategic objectives. ... The second and better way to approach the problem of IT value is to measure the effectiveness of the IT operation. Why should IT be the only department that is immune from corporate oversight? The advertising department is routinely measured on whether it is increasing corporate sales. HR is constantly being questioned on how its salary system compares to the industry. Manufacturing is always being challenged on its costs and if there are alternative methods and locations. Marketing must assure top management that its brand positioning is the best for the company. The only way to measure IT is to enforce a requirement that all large-scale new or modified system projects are analyzed, after completion, to verify that the objectives were met and the ROI was proven.


Evil digital twins and other risks: the use of twins opens up a host of new security concerns

Pittman says he sees other new attack scenarios arising from the use of digital twins; for example, if hackers are able to break into a digital twin environment, they could either steal the data or, depending on their motives, could manipulate the data used by the digital twin to deliberately skew the simulation outcomes. Given the potential for such scenarios, Pittman adds: “I think this is another instance in which we’re propagating technology without necessarily thinking about the repercussions. I’m not saying that’s good or bad; we’re humans, and it’s what we do really well. And while I don’t think we’re going to see something catastrophic, I think we’ll see something significant.” Pittman isn’t the only one voicing concerns about the potential for new security threats arising from digital twins. ... “We didn’t look at it specifically for the report, but that’s one of the issues that came up,” he says, adding that it’s a frequently-mentioned concern around training data used in machine learning algorithms — an attack type known as “data poisoning.”


A brief history of tech skepticism

Why have so many been so skeptical of developments whose success, in hindsight, seems obvious? One reason is that some technologies take time to reach maturity and mass adoption—and rely on the development of infrastructure that doesn’t yet exist. The ancient Greeks invented the aeolipile steam engine some 1,700 years before Thomas Newcomen created one deemed useful for industrial work. It took another 65 years before James Watt’s adaptations ushered in the true age of steam, a further quarter-century before the first steam locomotives began to appear, and another 20-odd years before the first passenger services became available. On this time line, the metaverse is in its infancy. Some shrewd observers—like author Matthew Ball, one of the world’s leading metaverse analysts—expect it will be years if not decades before the idea reaches its full potential. As humans, we are afflicted with tendencies that can skew our ability to objectively assess the potential of unfamiliar things. Our cognitive biases condition us to be suspicious of that which is novel or different. 


Prevent attackers from using legitimate tools against you

Lately, actors have been using remote monitoring and management (RMM) software to gain access to or maintain persistence in the systems. According to our team’s telemetry, this includes commonly used RMM software such as ConnectWise Control (formerly ScreenConnect), AnyDesk, Atera and Syncro. However, attackers are fully aware that defenders monitor for these known RMMs and are continually looking for alternate options. There was recently a case where the Action1 and SimpleHelp RMM tools were abused to deploy ransomware. It’s not just third-party tools that are being abused either. Attackers also try to kill or stop processes using built-in Windows processes such as taskkill or the net stop command to stop processes related to backup, which may potentially halt ransomware operations. Attackers can use legitimate binaries or tools that are part of operating systems to carry out malicious activities. These binaries are often referred to as LOLBins (“Living off the Land Binaries”). Some commonly used LOLBins are WMIC, PowerShell, Microsoft HTA engine (mshta.exe), and certutil.
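A minimal sketch of how a defender might flag the binaries named above in process-creation telemetry. The event format and field names ("image", "cmdline") are hypothetical; real sources such as Sysmon process-creation events use different schemas, and presence of a LOLBin alone is not proof of malice — these binaries also have legitimate uses.

```python
# Flag process-creation events that invoke the LOLBins named in the
# article, or built-ins like "net stop" aimed at backup services.
# Event schema here is invented for illustration.

LOLBINS = {"wmic.exe", "powershell.exe", "mshta.exe", "certutil.exe"}
SUSPECT_BUILTINS = {"taskkill.exe", "net.exe"}  # e.g. "net stop <backup service>"

def flag_event(event):
    """Return a description of why an event is suspicious, or None."""
    image = event.get("image", "").lower().rsplit("\\", 1)[-1]
    if image in LOLBINS:
        return f"LOLBin executed: {image}"
    if image in SUSPECT_BUILTINS and "stop" in event.get("cmdline", "").lower():
        return f"Possible service tampering: {event['cmdline']}"
    return None

events = [
    {"image": "C:\\Windows\\System32\\certutil.exe", "cmdline": "certutil -urlcache ..."},
    {"image": "C:\\Windows\\System32\\net.exe", "cmdline": "net stop backupsvc"},
    {"image": "C:\\Windows\\notepad.exe", "cmdline": "notepad.exe"},
]
for e in events:
    hit = flag_event(e)
    if hit:
        print(hit)
```

In practice such matching would be one signal among many, correlated with parent process, command-line arguments, and user context rather than used on its own.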


Network Administrator Skills: The Essential Job Toolkit

Problem-solving skills - Unlike troubleshooting, which requires rapid action to resolve immediate network issues, problem-solving is a technique used to address persistent concerns, such as slow performance, sluggish Internet connections, and Wi-Fi dead spots. Network administrators can keep their networks running smoothly by addressing performance, reliability, and security issues as they appear. "They must be able to identify and diagnose problems, develop and implement effective solutions, and communicate clearly with team members and stakeholders," says Peter Zendzian, president of managed service provider ZZ Servers. ... Critical thinking skills - Perhaps the most important skill a network administrator can possess is the ability to think critically. Critical thinking is the analysis of available facts, evidence, observations, and arguments to form a judgment. "This skill is valuable because it allows the network administrator to identify and resolve issues quickly and efficiently," says Timothy Mcknight, CEO of technology and cybersecurity firm Multitechverse. 


12 Ways to Approach the Cybersecurity Skills Gap Challenge in 2023

Finding ways to attract more diverse candidates for cybersecurity jobs could help fill more roles. “Prioritizing diverse hiring can help your company get an edge over other competitors in the market when it comes to recruitment of potential talent,” says Travis Lindemoen, managing director of IT staffing agency Nexus IT Group. How can companies approach diverse hiring? “If you want to be able to hire diverse candidates and underrepresented minorities, some of the things that [you] need to do, and things that we've done ourselves, is ensure that you’re putting inclusive language and narratives into your communications, into your job descriptions,” says Cross. Companies can also look to foster partnerships with organizations that help to promote diversity in the workforce. For example, Dell Technologies works with historically black colleges and universities (HBCUs). The HBCU Partnership Challenge, launched in 2017, aims to increase career prospects for HBCU students. In 2023, Cybersecurity and Infrastructure Security Agency (CISA) announced a partnership with nonprofit Women in CyberSecurity (WiCyS) to work on addressing the gender gap in cybersecurity and technology.


FBI Disarms Russian FSB 'Snake' Malware Network

For nearly 20 years, threat group Turla, operating inside the FSB's notorious Center 16, used Snake malware to steal secrets from North Atlantic Treaty Organization (NATO)-member governments, according to an announcement from the US Attorney's Office in the Eastern District of New York. Following compromise of target government systems, Turla would exfiltrate sensitive data through a network of compromised machines spread throughout the US and beyond to make detection harder, the DoJ said. The FBI developed a tool named Perseus, which was able to successfully command components of the Snake malware to overwrite itself on compromised systems, the DoJ added. "For 20 years, the FSB has relied on the Snake malware to conduct cyberespionage against the United States and our allies — that ends today," Assistant Attorney General Matthew G. Olsen of the Justice Department's National Security Division said in the statement. "The Justice Department will use every weapon in our arsenal to combat Russia’s malicious cyber activity, including neutralizing malware through high-tech operations, making innovative use of legal authorities, and working with international allies and private sector partners to amplify our collective impact.”


AI push or pause: CIOs speak out on the best path forward

“There is a catch-up game here. To this end, and in the meantime, managing AI in the enterprise lies with CxOs that oversee corporate and organizational risk. CIO/CTO/CDO/CISOs are no longer the owners of information risk” given the rise of AI, the CIDO maintains. “IT relies on the CEO and all CxOs, which means corporate culture and awareness of the huge benefits of AI, as well as the risks, must be owned.” Stockholm-based telecom Ericsson sees huge upside in generative AI and is investing in creating multiple generative AI models, including large language models, says Rickard Wieselfors, vice president and head of enterprise automation and AI at Ericsson. “There is a sound self-criticism within the AI industry and we are taking responsible AI very seriously,” he says. “There are multiple questions without answers in terms of intellectual property rights to text or source code used in the training. Furthermore, data leakage in querying the models, bias, factual mistakes, lack of completeness, granularity or lack of model accuracy certainly limits what you can use the models for.”


Cybersecurity stress returns after a brief calm: ProofPoint report

“Having conquered the unprecedented challenges of protecting hybrid work environments during the pandemic, security leaders felt a sense of calm. Although attack volumes did not abate, CISOs had a brief period of reprieve as they felt their organizations were less at risk,” Stacy said. The report also noted a strong willingness to pay ransoms, with 62% of CISOs saying they are ready to pay to restore systems and prevent data release if attacked by ransomware in the next 12 months. This perhaps has to do with 61% of them having cybersecurity insurance in place for various types of attacks. “Profitability at insurance companies offering cyber insurance has already taken a hit due to the raft of ransomware-related payouts in recent years,” said Michael Sampson, senior analyst at Osterman Research. “We have already seen cases where premiums have doubled for half the coverage. Securing cyber insurance is becoming more and more expensive. Some insurers are even likely to withdraw completely from offering coverage, given the negative trends.”


Mitigate Risk Beyond the Supply Chain with Runtime Monitoring

DevSecOps pipelines and golden paths are put in place to ensure that changes made to a system follow a defined process and are authorized before deployment. This helps maintain system stability, ensure compliance and mitigate risks. But pipeline controls have one obvious limitation when it comes to ensuring the security and compliance of an entire software system: they can only ensure security and compliance for changes that have gone through the pipeline. They do not account for bad actors who access production by going around the golden path. There are several key security questions that cannot be answered in the pipeline: How do we discover workloads that haven’t gone through our pipeline? What happens if an internal developer has the keys to production? What happens if we are breached? What happens if our deployment process has silent failures? Think of a golden pipeline as a river running into a lake. Monitoring what’s flowing in the river does not guarantee the quality of the water in the lake. You need to monitor the quality of the water in the lake too!
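The first of those questions, discovering workloads that never went through the pipeline, boils down to reconciling what is actually running against what the pipeline recorded as deployed. A minimal sketch, with hypothetical inventory data standing in for a real runtime inventory and pipeline audit log:

```python
# Sketch: reconcile running workloads against pipeline deployment records.
# The data here is illustrative, not drawn from any particular tool.

def find_unpipelined(running_images, pipeline_deployed_images):
    """Return workloads seen in production that never went through the pipeline."""
    return sorted(set(running_images) - set(pipeline_deployed_images))

running = ["app:1.4.2", "app:1.4.3", "debug-shell:latest"]   # from runtime inventory
deployed = ["app:1.4.2", "app:1.4.3"]                        # from pipeline audit log

print(find_unpipelined(running, deployed))  # -> ['debug-shell:latest']
```

Anything in the difference is a workload that reached the lake without flowing down the river, and is worth investigating.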



Quote for the day:

"Leaders must encourage their organizations to dance to forms of music yet to be heard." -- Warren G. Bennis

Daily Tech Digest - May 09, 2023

A Guide to Steganography: Meaning, Types, Tools, & Techniques

Steganography encodes a secret message within another non-secret object in such a manner as to make the message imperceptible to those who aren’t aware of its presence. Of course, because of this secrecy, steganography generally requires the recipient to be aware that a message is forthcoming. To understand the meaning of steganography, it’s important to know the origins of the technique. The practice of steganography dates back to ancient Greece, from which we also get the word itself: a combination of the Greek words “steganos” (covered or concealed) and “graphein” (writing). ... As you might imagine, steganography can be used for both good and ill. For instance, dissidents living under oppressive regimes can use steganography to hide messages from the government, passing sensitive information within a seemingly innocuous medium. However, digital steganography is also a tool for malicious hackers. An attacker can hide the source code for a malware application inside another supposedly harmless file (such as a text file or an image). A separate program can then extract and run the source code.
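The digital technique described above can be illustrated with a minimal least-significant-bit (LSB) sketch: each bit of the secret message replaces the low bit of one cover byte, a change imperceptible in image pixel data. Real stego tools work on actual image formats and usually add encryption, but the principle is the same:

```python
def hide(cover: bytes, secret: bytes) -> bytes:
    """Embed `secret` in the least-significant bits of `cover` bytes (MSB first)."""
    bits = [(b >> i) & 1 for b in secret for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for this message")
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # clear the low bit, then set it to the message bit
    return bytes(out)

def reveal(stego: bytes, n_bytes: int) -> bytes:
    """Recover `n_bytes` of hidden message from the low bits of `stego`."""
    bits = [b & 1 for b in stego[: n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k : k + 8]))
        for k in range(0, len(bits), 8)
    )

cover = bytes(range(64))          # stand-in for raw image pixel data
stego = hide(cover, b"hi")
print(reveal(stego, 2))           # -> b'hi'
```

Note that the stego bytes differ from the cover by at most one in each position, which is why the change is invisible to a casual observer but trivially recoverable by a recipient who knows to look.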


How to Trim Your Cloud Budget

An essential first step in cloud budget trimming is to ask the enterprise’s FinOps team to evaluate current usage, Orshaw advises. “You need to have a clear understanding of what you’re using and how much you’re paying,” he says. “Start by looking at your cloud bills and identifying any unused or underutilized resources.” Optimizing current cloud resources can help bring a soaring budget under control. “This means resizing instances, eliminating instances that are no longer needed, and adopting a more granular approach to resource allocation,” Orshaw says. Automated tools can aid in this process, he adds. Virtually all cloud service providers offer some form of cost optimization support. “Understanding these tools and techniques … [can] save organizations a lot of money in the long term,” Ozdemir says. Also consider taking advantage of reserved instances, Orshaw advises. “Reserved instances offer a significant discount over on-demand instances, but require a commitment of at least one year,” he explains. “Reserved instances are best for workloads with predictable usage patterns.”


How Security Architects Fit Into Organizations

The best-known security architecture domains are identity and access management and network security. The latter works on zoning and firewall topics (i.e., how to structure a network to hinder lateral movement while allowing components and applications to interact). Identity and access management covers authentication and authorization for internal employees, but nowadays also for customers, partners, and suppliers interacting with company services and applications. Active Directory, LDAP, and identity providers are the technologies and buzzwords in this area. The expansion and growth of CISO organizations drive their need for tool support to ensure efficiency, especially for logging network and IAM events, identifying potential attacks, and security incident management. Splunk, Sentinel, Microsoft Defender, and Jira are typical solutions for turning log events into actionable items and managing potential security incidents. Architects help with the initial design and maintain and evolve such solutions over the years.


Overcoming The Dark Side Of Being A Problem-Solver

The truth is, harnessing the superpower of problem solving can be like wielding a double-edged sword. On one hand, it's an essential skill that allows us to navigate through life's challenges and find solutions to complex problems. On the other hand, when taken too far, it can lead to overthinking, anxiety, and a lack of trust in ourselves and others. When we're accustomed to taking charge and finding solutions to challenges, we easily become critical of others and their ability to solve problems. We start to believe that we're the only ones who can fix the issue effectively, while everyone else is incompetent. This lack of trust also extends to ourselves. Constantly anticipating problems and overthinking every situation forces us to doubt our abilities and decisions. We become paralyzed by the fear of making the wrong decision or taking the wrong action, leading to procrastination, analysis paralysis and missed opportunities. So how do we overcome this problem of being a problem solver? How do we ensure our superpowers don't morph into weaknesses? 


9 upskilling tips that pay dividends

CIOs shouldn’t feel they have a responsibility to upskill only their own employees — they should upskill any employee with some degree of technical skills, Ramirez stresses. This is because “we’re shifting toward skills-based staffing to help close the talent gap. It’s the idea that great talent can come from anywhere.” This can be done by utilizing learning platforms and talent marketplaces, where IT employees share their strengths. One way of doing this is by IT posting small projects that employees can work on together, which they find out about through a talent marketplace. ... The speed with which technology changes requires every employee who cares about their job to upskill and train, and Long wants to make that a shared responsibility. “We as a company want to improve skills, but I remind employees they’re the custodian of their career.” Employees have an annual meeting with their manager to set goals in terms of jobs and skills, and Long says he and other leaders are there to help and provide mentorship. From there, it is incumbent upon the employee to schedule a meeting with their manager once a month or quarter to update them on what they’ve done on their development plan, he says.


Review your on-prem ADCS infrastructure before attackers do it for you

If your firm is typical, your Active Directory infrastructure has been in place for many years. As a result, you may have older settings, leftover services, and older forest and domain settings. Pentesters and attackers will often use ADCS attacks to showcase how trivial it can be to gain access. As SpecterOps has showcased in a whitepaper on the topic, there are several methods to run attack techniques. If your Active Directory certificate template permits client authentication and allows an enrollee to supply an arbitrary subject alternative name (SAN), the attacker can request a certificate based on the vulnerable template and specify an arbitrary SAN. Thus, if the attacker has a password gleaned from a user authenticated on the domain, they can use various tools to request a certificate that names the domain administrator in the SAN field. You can already see what’s coming next: the attacker has requested a certificate and received it with the equivalent of domain administrator rights. Even if you’ve already fixed this potential for breach and pivot in-house, I’d argue that you’d still want to reach out to any consultant you rely on — if they have a weakness, you share the risk.
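The vulnerable-template pattern described above, client authentication allowed plus an enrollee-supplied subject, can be sketched as a simple audit check. The template dicts below are hypothetical stand-ins for attributes you would read from AD over LDAP; the flag constant follows the msPKI-Certificate-Name-Flag definition and the EKU OID is the standard one for client authentication:

```python
# Sketch: flag certificate templates matching the "arbitrary SAN" pattern.
# Template data here is illustrative, not read from a real directory.

CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT = 0x00000001   # msPKI-Certificate-Name-Flag bit
CLIENT_AUTH_EKU = "1.3.6.1.5.5.7.3.2"            # Client Authentication OID

def vulnerable_templates(templates):
    """Templates where an enrollee picks the subject AND the cert works for logon."""
    return [
        t["name"]
        for t in templates
        if t["name_flags"] & CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT
        and CLIENT_AUTH_EKU in t["ekus"]
    ]

templates = [
    {"name": "User",      "name_flags": 0x0, "ekus": [CLIENT_AUTH_EKU]},
    {"name": "WebServer", "name_flags": 0x1, "ekus": ["1.3.6.1.5.5.7.3.1"]},
    {"name": "LegacyVPN", "name_flags": 0x1, "ekus": [CLIENT_AUTH_EKU]},
]
print(vulnerable_templates(templates))  # -> ['LegacyVPN']
```

A real audit would also check enrollment permissions and manager-approval settings, but even this two-condition filter catches the scenario the excerpt describes.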


What happens when we run out of data for AI models

One of the most significant challenges of scaling machine learning models is the diminishing returns of increasing model size. As a model’s size continues to grow, its performance improvement becomes marginal. This is because the more complex the model becomes, the harder it is to optimize and the more prone it is to overfitting. Moreover, larger models require more computational resources and time to train, making them less practical for real-world applications. Another significant limitation of scaling models is the difficulty in ensuring their robustness and generalizability. Robustness refers to a model’s ability to perform well even when faced with noisy or adversarial inputs. Generalizability refers to a model’s ability to perform well on data that it has not seen during training. As models become more complex, they become more susceptible to adversarial attacks, making them less robust. Additionally, larger models memorize the training data rather than learn the underlying patterns, resulting in poor generalization performance. Interpretability and explainability are essential for understanding how a model makes predictions.


5G Networks Are Performing Worse. What’s Going On?

The amount of 5G performance degradation isn’t consistent from country to country, and there are a handful of countries bucking the general trend. Ookla’s speed-test data identifies four: Canada, Italy, Qatar, and the United States. That said, Giles doesn’t believe that means there’s necessarily any common denominator between them. For the United States, Giles suggests, more availability of new spectrum has so far helped operators in the country stay out ahead of growing congestion on the new networks. In Qatar, by contrast, the massive investment around the 2022 FIFA World Cup included building out robust 5G networks. It’s too early to say whether or how 6G development will be affected by 5G’s early stumbles, but there are a handful of possible impacts. It’s conceivable, for example, given the lackluster debut of millimeter-wave, that the industry devotes less time to terahertz-wave research and instead considers how cellular and Wi-Fi technologies could be merged in areas requiring dense coverage.


Radical Transparency: How a Strong Startup Culture can Deliver Success

Culture is a reflection of a company's core values in action. If you know what you want your company to be, the people you want to attract and the type of service you want to be known for, you can define a base set of principles to act as a guiding light. This can keep a company on track and create a body of highly motivated overachievers who are not only incredibly driven but personally invested and incentivized to bring the company and their teams along with them for the ride as they build the business together. Key to this for us has been embracing radical transparency, internally and externally. This enables us to show, not just tell, our true values across every aspect of the company and team. While not easy, it’s an investment that employees and customers appreciate, reward and reciprocate. For example, we allow employees to fully access just about all company data, whether it relates to customer support, finances or any other area. This is the foundation of a business model that has existed from our outset.


To enable ethical hackers, a law reform is needed

What’s needed is fresh eyes and an outsider mentality to see where issues exist. This is where ethical hacking comes in. An organization can have a legion of external researchers on their side probing continuously for any weaknesses, uncovering vulnerabilities that automated scans and internal teams miss, performing recon to discover new insecure assets. Like cybercriminals, hackers will also be leveraging tools such as publicly available Common Vulnerabilities and Exposures (CVE) databases. They go beyond CVEs in known applications to discover and examine hidden assets that potentially pose a greater risk. One-third of organizations say they monitor less than 75% of their attack surface and 20% believe over half of their attack surface is unknown or not observable. So, it’s easy to understand why cybercriminals with significant and often cheap labor power plus an array of techniques target unknown assets and regularly uncover exploitable vulnerabilities. The way to keep pace and avoid burnout in internal security teams is to engage hackers to work on their behalf by setting up a vulnerability disclosure program (VDP).



Quote for the day:

"Most people live with pleasant illusions, but leaders must deal with hard realities." -- Orrin Woodward

Daily Tech Digest - May 07, 2023

How Modern Data Platforms Support Data Governance

To enable the effective use of data analytics, many organizations are employing modern data platforms, which provide capabilities such as nearly unlimited flexibility for data collection, clear visibility into data sets and data democratization to make analytics available to users across an organization. But perhaps the most valuable capability a modern data platform can provide is data governance: the establishment of clear rules about the access and use of data, as well as the enforcement of those rules. “Governance is a cornerstone of the modern data platform,” says Rex Washburn, head of modern data platforms for CDW’s data practice. “If you don’t have data governance, you don’t have a modern data platform.” The governance that modern data platforms offer separates them from legacy data architectures. A modern platform can simplify and unify an organization’s data environment, enabling streamlined governance and security.


It’s Time to ‘Expunge’ Data Governance

A change is required: moving away from embedding data governance as a regulatory, watchdog, or policy-compliance function, and instead treating it as an essential value stream closely tied to business strategy. This move, however, calls for data governance practitioners to acquaint themselves with their organizational goals and objectives. Having fully comprehended their business direction and related pain points, they will then be empowered to determine which data elements are most critical and in turn prepare and maintain these sustainably. A change in perspective towards ‘value-driven use cases’ helps, too. ‘Data improvement and ethical handling’ sounds much better: it’s clearer, less intimidating, and crystal clear in its purpose, though it is a simplified form of data governance. Adopting similar approaches would enable data governance to be more easily understood, thereby increasing its adoption rate and building a strong stakeholder base. As this develops, data governance can serve as a strategic business enabler, with executive support and enhanced stakeholder involvement.


Good bot, bad bot: Using AI and ML to solve data quality problems

With the rise of human-like AI, bots can slip through the cracks through quality scores alone. This is why it’s imperative to layer these signals with data around the output itself. Real people take time to read, re-read and analyze before responding; bad actors often don’t, which is why it’s important to look at the response level to understand trends of bad actors. Factors like time to response, repetition and insightfulness can go beyond the surface level to deeply analyze the nature of the responses. If responses are too fast, or nearly identical responses are documented across one survey (or multiple), that can be a tell-tale sign of low-quality data. Finally, going beyond nonsensical responses to identify the factors that make an insightful response — by looking critically at the length of the response and the string or count of adjectives — can weed out the lowest-quality responses. By looking beyond the obvious data, we can establish trends and build a consistent model of high-quality data.
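The response-level signals described here (time to respond, repetition, and length) can be sketched in a few lines. The thresholds and field names below are illustrative, not calibrated against any real survey platform:

```python
# Sketch: flag survey responses using the signals discussed above.
# Thresholds are illustrative; a production model would calibrate them.

def suspicious(responses, min_seconds=5.0, min_words=3):
    """Return ids of responses that look automated or low-quality."""
    seen, flagged = set(), []
    for r in responses:
        text = " ".join(r["text"].lower().split())   # normalize whitespace/case
        if (r["seconds_to_answer"] < min_seconds     # answered implausibly fast
                or len(text.split()) < min_words     # too short to be insightful
                or text in seen):                    # near-verbatim duplicate
            flagged.append(r["id"])
        seen.add(text)
    return flagged

responses = [
    {"id": 1, "seconds_to_answer": 42.0, "text": "The checkout flow was confusing on mobile."},
    {"id": 2, "seconds_to_answer": 1.2,  "text": "good"},
    {"id": 3, "seconds_to_answer": 40.0, "text": "The checkout flow was confusing on mobile."},
]
print(suspicious(responses))  # -> [2, 3]
```

In practice you would layer these checks on top of panel-level quality scores and use fuzzy rather than exact matching for duplicates, but the layered-signal idea is the same.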


ChatGPT Comes to Business Continuity

ChatGPT has pulled back the curtain on the business continuity world. Business continuity, with all its regulations and oversight bodies, still remains a somewhat subjective profession. Although regulations abound, each company has a unique way of creating its programs. Typically, a new business continuity professional going into a company assesses the previous program for gaps and completeness in a substantive way. Many times, once an assessment is completed, you can almost gauge the background and experience of the previous person holding that position. Modifications are made and gaps are filled based on a limited understanding of business continuity as a whole. ChatGPT has uncovered the core of the foundational components of business continuity and all the ancillary components, a documented blueprint that could establish an excellent starting point. When looking for a baseline approach to building a complete program, there are many avenues one can go down, with tentacles reaching far and wide.


Hardware-Based Cybersecurity For Software-Defined Vehicles

The Secure-CAV Consortium, a collaborative project that aims to improve the safety and security of tomorrow’s connected and autonomous vehicles (CAVs), offers concrete examples of hacks. One is a mobile network attack in which an attacker tries to infect the Telematic Control Unit with tampered firmware. This uses a “man in the middle” type of attack to make an over-the-air firmware update. If successful, hackers could intercept telematics traffic using GSM and spoof SMS commands, sending direct commands to the device. The consequences range from the hackers gaining access to the infotainment unit, to denial-of-service attacks against emergency services, to controlling the engine, transmission, or brakes. ... The Secure-CAV Consortium has developed a flexible and functional architecture for real environment trials to train, test, validate, and demonstrate automotive cybersecurity solutions. The goal is to faithfully and accurately reproduce the behavior of a real vehicle while also being reconfigurable, portable, safe, and inexpensive to construct.


When you get to the top, send the elevator back down

There are so many demands in business—you can’t be everywhere all the time. Over the course of my career, I have learned it’s okay to say “no” and to prioritize what matters most to move the business forward. If you stay laser-focused on your priorities and not the distractions of the day, you will be more productive. It is important to say no to the things that distract you from your goals. ... The only limitations in life are those you put on yourself. I believe the glass ceiling—or any ceiling—is fragile and delicate. So, punch through it! Shift your mindset to focus on what’s possible and push through those boundaries. The world is your oyster. Know that career growth is a mindset versus physical limitations within the walls of corporate life. Many people look for a silver bullet to career growth. In my experience, people willing to do the work—the hard work—are often the ones who stand out and grow their careers faster. The attention to detail and doing the not-so-glamorous stuff make the difference between good and great.


Why DevSecOps Is Essential for Every IT Industry

In a traditional organization, the InfoSec team is responsible for keeping the company’s data safe from external threats. They do this by implementing security controls and monitoring for compliance. The problem is that these security controls can often slow down the software development process. ... The key to making DevSecOps work is a collaboration between the development, operations, and security teams. In a traditional organization, these teams often operate in silos, leading to conflict and delays. DevSecOps fosters a culture of collaboration and communication between these teams, which is essential for delivering secure software quickly. ... For example, they might use continuous integration/continuous delivery (CI/CD) pipelines to automate the software delivery process. They might also use security scanning tools to automatically find and fix security vulnerabilities in code and configuration management tools to ensure that all servers are properly configured and compliant with security policies.


Data Leakage Becoming Bigger Issue For Chipmakers

“If you have a chiplet-based approach, or a multi-chip package, then all of these chips have to work together to yield the security you need,” said Peter Laackmann, distinguished engineer for the Connected Secure Systems Division at Infineon. “For example, there have been attacks where there was a security chip inside, which was certified and quite good, but it was also in the same package as a standard microcontroller. The problem was that the standard microcontroller was fully controlling the security chip. After a few attacks on the microcontroller, then you get the keys. This means the security controller cannot protect the complete system. And the same applies for all sorts of chiplets and multi-chip packages.” Laackmann said that for security chips/chiplets, this is unlikely to be a problem because those chips typically are not stressed the way a processing element would be. But for other components, aging can cause circuits to behave differently, and that differential can be used to collect important data. “Some chips have pins that are used to supply the internal core voltage.”


European Commission Proposes Network of Cross-Border SOCs

The commission late last month introduced a proposal for a European "Cyber Shield" underpinned by a network of national SOCs and cross-border SOCs that are a consortium of at least three national centers. The bill, the Cyber Solidarity Act, would also create a Cybersecurity Emergency Mechanism allowing governments to tap into private sector incident responders during emergencies. Even before Russia's February 2022 attempt to conquer Ukraine, European officials lamented poor information sharing between national capitals on cybersecurity incidents, noting in a 2020 cybersecurity strategy that "no operational mechanism" exists to coordinate among member countries and European Union institutions in the event of "large-scale, cross-border cyber incidents or crises." That omission has since grown more glaring for European Commission officials monitoring reports of suspicious critical infrastructure security incidents occurring since the Russian invasion.


Why generative AI is more dangerous than you think

Of course, the big threat to society is not the optimized ability to sell you a pair of pants. The real danger is that the same techniques will be used to drive propaganda and misinformation, talking you into false beliefs or extreme ideologies that you might otherwise reject. ... And because AI agents will have access to an internet full of information, they could cherry-pick evidence in ways that would overwhelm even the most knowledgeable human. This creates an asymmetric power balance often called the AI manipulation problem in which we humans are at an extreme disadvantage, conversing with artificial agents that are highly skilled at appealing to us, while we have no ability to “read” the true intentions of the entities we’re talking to. Unless regulated, targeted generative ads and targeted conversational influence will be powerful forms of persuasion in which users are outmatched by an opaque digital chameleon that gives off no insights into its thinking process but is armed with extensive data about our personal likes, wants and tendencies, and has access to unlimited information to fuel its arguments.



Quote for the day:

"We are too much in awe of those who succeed and far too dismissive of those who fail." -- Malcolm Gladwell

Daily Tech Digest - May 05, 2023

Data is choking AI. Here’s how to break free.

As enterprises deepen their embrace of AI and other data-driven, high-performance computing, it’s critical to ensure that performance and value are not starved by underperforming processing, storage and networking. Here are key considerations to keep in mind. Compute. When developing and deploying AI, it’s crucial to look at computational requirements for the entire data lifecycle: starting with data prep and processing (getting the data ready for AI training), then during AI model building, training, and inference. Selecting the right compute infrastructure (or platform) for the end-to-end lifecycle and optimizing for performance has a direct impact on the TCO and hence ROI for AI projects. End-to-end data science workflows on GPUs can be up to 50x faster than on CPUs. To keep GPUs busy, data must be moved into processor memory as quickly as possible. Depending on the workload, optimizing an application to run on a GPU, with I/O accelerated in and out of memory, helps achieve top speeds and maximize processor utilization.
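The "keep GPUs busy" point above is, at its core, a prefetching problem: a background worker loads and decodes the next batch while the current one is being processed, so I/O and compute overlap. The following is a pure-Python stand-in for that pattern, not a real GPU pipeline; frameworks implement the same idea with pinned memory and asynchronous transfers:

```python
# Sketch: overlap data loading with processing via a bounded prefetch queue.
# Pure-Python illustration of the pattern only; batch values are toy data.

import queue
import threading
import time

def loader(batches, q):
    for b in batches:
        time.sleep(0.01)           # simulate I/O: read + decode a batch
        q.put(b)
    q.put(None)                    # sentinel: no more data

def run(batches):
    q = queue.Queue(maxsize=2)     # small bounded buffer keeps memory in check
    threading.Thread(target=loader, args=(batches, q), daemon=True).start()
    results = []
    while (b := q.get()) is not None:
        results.append(b * 2)      # simulate compute on the "device"
    return results

print(run([1, 2, 3]))  # -> [2, 4, 6]
```

The bounded queue is the important design choice: it lets the loader run ahead of the consumer without ever buffering more than a couple of batches, which is the same trade-off a GPU input pipeline makes between processor utilization and memory footprint.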


New leadership for a new era of thriving organizations

Leading companies today seek to become learning organizations that are continually evolving, exploring, ideating, experimenting, scaling up, executing, scaling down, and exiting across many different activities in parallel. By accelerating change and allowing for positive surprises and innovations to flourish, they consistently outperform those companies that focus instead on always trying to deliver the “perfect” plan. We are in the midst of a profound shift in how work gets done, one that asks leaders to go beyond being controllers with a mindset of certainty to becoming coaches who operate with a mindset of discovery and foster continual rapid exploration, execution, and learning. Leaders and leadership teams can learn how to set and work toward outcomes rather than traditional key performance indicators; to foster rapid experimentation and learn from both successes and setbacks; and to manage risk differently, through testing, learning, and fast adaptation. The leadership practices enabling this shift include the following: operating in short cycles of decision, action, and learning.


The Fourth Industrial Revolution is here. Here’s what it means for the way we work

Herein lies the double-edged sword of the Fourth Industrial Revolution. Although smart machines and artificial intelligence are predicted to bring unimaginable efficiencies, they will do so by increasingly replacing a wide swath of existing human jobs. While historically jobs have always been around for human beings through technological revolutions, we have never had a technological revolution that has been capable of displacing so many human beings and so much human brain power as the one we are transitioning through now. According to a report from Oxford Economics, a global forecasting and quantitative analysis firm, smart machines are expected to displace about 20 million manufacturing jobs across the world over the next decade, including more than 1.5 million in the U.S. Other studies predict that smart machines, robotics, artificial intelligence, blockchain technology, 3D printing, and automation will put 20% to 40% of existing jobs at risk over the next decades. And a report from the Brookings Institution finds that 25% of U.S. workers will face “high exposure” and risk being displaced over the upcoming few decades. 


Even Amazon can't make sense of serverless or microservices

Beyond celebrating their good sense, I think there's a bigger point here that applies to our entire industry. Here's the telling bit: "We designed our initial solution as a distributed system using serverless components... In theory, this would allow us to scale each service component independently. However, the way we used some components caused us to hit a hard scaling limit at around 5% of the expected load." That really sums up so much of the microservices craze that was tearing through the tech industry for a while: IN THEORY. Now the real-world results of all this theory are finally in, and it's clear that in practice, microservices pose perhaps the biggest siren song for needlessly complicating your system. And serverless only makes it worse. What makes this story unique is that Amazon was the original poster child for service-oriented architectures, the far more reasonable precursor to microservices: an organizational pattern for dealing with intra-company communication at crazy scale, when API calls beat scheduling coordination meetings. SOA makes perfect sense at the scale of Amazon.


The impact of ChatGPT on multi-factor authentication

As adoption of AI/ML-backed tools continues to grow, it will be important to focus on key ways to mitigate the risks associated with their use. When the efficacy of identity measures that companies have trusted for decades such as voice verification and video verification erodes, strongly linked electronic identity is even more important. Phishing-resistant credential solutions such as security keys — that are hardware-backed and purpose-built around cryptographic principles — excel in these scenarios. Security keys that support FIDO2 also ensure that these credentials are tied to a specific relying party. This binding prevents attackers from preying on simple human error. With security keys, credentials are securely stored in hardware which prevents those credentials from being transferred to another system without the user’s knowledge or by accident. The use of FIDO2 authenticators also greatly reduces the efficacy of social engineering through phishing as users cannot be tricked into vending a one-time password to an attacker, or have SMS authentication codes stolen directly through a SIM swapping attack.
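One reason FIDO2 credentials resist phishing is that the client signs over a structure (clientDataJSON) that embeds the origin of the page requesting authentication, so a relying party can reject assertions minted on a look-alike domain. The sketch below illustrates just that one check; a real WebAuthn verifier also validates the signature, the RP ID hash, and challenge freshness:

```python
# Sketch: the origin-binding check a relying party performs on clientDataJSON.
# Simplified illustration only; not a complete WebAuthn verification.

import json

def check_client_data(client_data_json: bytes, expected_origin: str,
                      expected_challenge: str) -> bool:
    """Accept only assertions created for our origin and our challenge."""
    data = json.loads(client_data_json)
    return (data.get("type") == "webauthn.get"
            and data.get("origin") == expected_origin
            and data.get("challenge") == expected_challenge)

good = json.dumps({"type": "webauthn.get", "origin": "https://example.com",
                   "challenge": "abc123"}).encode()
phish = json.dumps({"type": "webauthn.get", "origin": "https://examp1e.com",
                    "challenge": "abc123"}).encode()

print(check_client_data(good, "https://example.com", "abc123"))   # -> True
print(check_client_data(phish, "https://example.com", "abc123"))  # -> False
```

Because the browser, not the user, fills in the origin, there is no one-time code for a victim to hand over: an assertion generated on the phishing domain simply fails verification.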


Three Powerful Tactics Entrepreneurs Use For Instant Confidence

Tried and tested by entrepreneurs who have faced nerves and self-doubt, reminding yourself of what you have already achieved can give your confidence levels the boost they need. Create a metaphorical cookie jar of all your business and life wins and dip in for instant assurance. Samantha from ICI CARE keeps a list of her past wins and her big picture vision on the wall where she works, ensuring they are at eye level. "By having that reminder, I win over my brain before it spirals down,” she said. “Self-doubt is normal but I keep my focus and energy on achievement.” ... Confidence is a state of mind, which means it’s also a choice. Dr Amanda Foo-Ryland, founder of Your Life Live It, knows this well, explaining that it’s also “about how you choose to see a new situation.” She knows: “I can either be confident or choose not to be.” Like Sarceno, she incorporates visualisation into the way ahead. “If I choose to be confident, I imagine the event and see myself in it being confident, being the person I want to be. I observe myself in the movie in my head.”


White House unveils AI rules to address safety and privacy

This new effort builds on previous attempts by the Biden administration to promote some form of responsible innovation, but to date Congress has not advanced any laws that would rein in AI. In October, the administration unveiled a blueprint for a so-called “AI Bill of Rights” as well as an AI Risk Management Framework; more recently, it has pushed for a roadmap for standing up a National AI Research Resource. The measures don’t have any legal teeth; they are just more guidance, studies and research "and they’re not what we need now," according to Avivah Litan, a vice president and distinguished analyst at Gartner Research. “We need clear guidelines on development of safe, fair and responsible AI from the US regulators,” she said. “We need meaningful regulations such as we see being developed in the EU with the AI Act. ... US regulators need to step up their game and pace." In March, Senate Majority Leader Chuck Schumer, D-NY, announced plans for rules around generative AI as ChatGPT surged in popularity. Schumer called for increased transparency and accountability involving AI technologies.


Court Dismisses FTC Complaint Against Data Broker Kochava

The FTC in its lawsuit filed last August against Idaho-based Kochava said the company invades consumers' privacy by selling advertisers geolocation data sets of mobile phone holders tied to a unique ID. That information could be used to identify individuals who have visited abortion clinics, mental health providers and other sensitive locations, the agency said. Kochava filed its own lawsuit in the same Idaho federal court weeks before the FTC's action, as a bid to preemptively counter the federal agency. The company also filed a motion last October to dismiss the FTC's lawsuit. Winmill wrote in his Thursday ruling that nothing prevents the FTC from asserting that an invasion of privacy by itself can constitute a legitimate cause for suing. The agency failed, he said, by not establishing that Kochava's business practices constitute substantial injury to consumers. "The privacy concerns raised by the FTC are certainly legitimate. Disclosing where a person has been every fifteen minutes over a seven-day period could undoubtedly reveal information that the person would consider private, such as their travel habits, medical conditions, and social or religious affiliations," he wrote.


The Merck appeal: cyber insurance and the definition of war

The war exclusion was found to be not applicable, and the court used the insurer’s own words to detail the “why” behind the denial. When read by a layman such as me, it appears the judges believed the insurers had ample time to adjust their policy dynamics and didn’t get around to it. ... That said, when a nation’s intelligence entities run covert operations, which Russia does on a regular basis, the government’s goal is always to maintain plausible deniability of any illegal acts. Could the NotPetya attack have been sponsored by the Russian Federation? Absolutely, and indeed, Kroll Cyber Security, the cyber consultant for the insurers, opined before the court “with high confidence” that the attack was “orchestrated by actors working for or on behalf of the Russian Federation.” Yet, one should note that when the US Department of Justice had the opportunity to pin the tail on that same donkey, it demurred. Thus, if a national government is not going to attribute nation-state sponsorship to an attack, then it will be most difficult for an insurance entity to successfully do so within the courts without explicit verbiage in the cybersecurity exclusions.


How the influence of data and the metaverse will revolutionize businesses and industries

From machine and building performance to energy and emissions, data is the crucial link between the physical and digital worlds. It’s also the key to solving efficiency and sustainability challenges that are now more urgent than ever. If the metaverse is meant to transform business and industries, it must be built on solid data foundations. ... Digital transformation started with connecting physical assets via IoT and edge controls. Its disruptive potential has proven to carry operational and energy efficiency across all levels of an enterprise. When we introduce powerful software capabilities and start leveraging the generated data, we can create virtual representations of the real world by combining simulation, augmented reality (AR), data sharing, and visualization all at once. ... It seems that all these and many more possible applications have something in common: they are all about bringing together technologies to address challenges of the physical world, by giving real people the means to learn, collaborate, act, and essentially create value through a virtual, digitally augmented space.



Quote for the day:

"You always believe in other people. But that's easy. Sooner or later you have to believe in yourself." -- Gary, The Muppets