Daily Tech Digest - September 25, 2023

Computer vision's next breakthrough

Beyond quality and efficiency, computer vision can help improve worker safety and reduce accidents on the factory floor and other job sites. According to the US Bureau of Labor Statistics, there were nearly 400,000 injuries and illnesses in the manufacturing sector in 2021. “Computer vision enhances worker safety and security in connected facilities by continuously identifying potential risks and threats to employees faster and more efficiently than via human oversight,” says Yashar Behzadi, CEO and founder of Synthesis AI. “For computer vision to achieve this accurately and reliably, the machine learning models are trained on massive amounts of data, and in these particular use cases, the unstructured data often comes to the ML engineer raw and unlabeled.” Using synthetic data is also important for safety-related use cases, as manufacturers are less likely to have images highlighting the underlying safety factors. “Technologies like synthetic data alleviate the strain on ML engineers by providing accurately labeled, high-quality data that can account for edge cases that save time, money, and the headache inaccurate data causes,” adds Behzadi.


Five years on: the legacy of GDPR

Five years on, “the European regulation has inspired data protection around the world and many countries have put privacy standards in place. These include countries in South America such as Argentina, Brazil, and Chile, and in Asia, such as Japan and South Korea. In Australia, the Privacy Act has been in place since 1988, but was recently amended to mirror GDPR concepts. GDPR has also had a strong influence in the US, where several states have introduced data protection legislation, including California with the California Consumer Privacy Act and Colorado with the Colorado Privacy Act. On a federal level, the draft American Data Privacy and Protection Act is another example of where regulation is heading.” So what impact has it had on how organisations are run and data is handled? Aditya Fotedar, CIO at Tintri, a provider of auto-adaptive, workload-intelligent platforms, explains that while GDPR has ushered in significant changes, it builds upon existing regulations: “GDPR was a progression on the existing EU privacy laws, the main changes being the sub-processor contractual clauses, the right to be forgotten, and the size of the fines.”


Embracing Privacy by Design as a Corporate Responsibility

Companies are increasingly realizing the immense importance of a paradigm shift towards Privacy by Design, because this approach significantly reduces the cost of adapting to new legislation, builds consumer trust, and carries fewer risks. Data protection is here to stay, and this is a realization that everyone – from companies to legislators to consumers – is becoming more and more aware of and acting upon. The important thing now is to approach data protection more proactively – and to make it a general corporate responsibility. Data protection rights are also human rights! So far, the advertising industry has viewed data protection as a drag, but this perception will have to change as we move through 2023. After all, data protection is no longer a limitation, but a selling point. As a result, industry players are beginning to view it as a worthwhile investment rather than a cost. Companies are doing this proactively because they want to stay competitive, keep their brand privacy-centric, and ensure that customers continue to trust them.


4 reasons cloud data repatriation is happening in storage

Moving storage to another location means disconnecting on-site storage resources, such as SANs, NAS devices, RAID equipment, optical storage and other technologies. But how likely is it that an IT department making a push to cloud storage clears out the storage section of its data center and makes constructive use of the newly empty space? Not very likely, and the organization is still paying for every square foot of floor space in that data center. Assuming IT managers performed a careful, phased migration from on-site systems to the cloud, they probably would have analyzed the use of space made available by the migration. If the company owns the displaced storage assets, managers must consider what happens to them after a department or application moves out of the data center. From a business perspective, it may make sense to retain these assets and have them ready for use in an emergency. This approach also ensures that storage resources are available if cloud data repatriation occurs, but it doesn't save space -- or money. Continual advances in computing power can mean that repatriation may not require as much physical space for the same or greater processing speeds and storage capacity.


10 digital transformation questions every CIO must answer

Am I engaging people on the front lines to formulate DX plans? According to Rogers, the answer should be yes. “You need people on the front lines, because it is the business units who have people out there talking to customers every day,” he says, adding that while C-suite support for transformation is crucial, the front-line perspectives offered by lower-tier employees are those that can identify where change is needed and can truly impact the business. ... Am I identifying and using the right business metrics to measure progress? Most CIOs have moved beyond using traditional IT metrics like uptime and application availability to determine whether a tech-driven initiative is successful. Still, there’s no guarantee that CIOs use the most appropriate metrics for measuring progress on a DX program, says Venu Lambu, CEO of Randstad Digital, a digital enablement partner. “It’s important to have the technology KPIs linked to business outcomes,” he explains. If your business wants to have faster time to market, improved customer engagement, or increased customer retention, those are what CIOs should measure to determine success.


Unlocking the Value of Cloud Services in the AI/ML Era

As cloud complexity and maturity grow, the goal for businesses should be more than just “lift and shift” scenarios, especially when such migrations can result in higher costs. The key is understanding how to unlock the real value of cloud services to meet specific organizational needs. For example, with a clear view of how a vendor’s PaaS and SaaS strengths map to business objectives, organizations can release new features, cut costs, and gain powerful new capabilities to support long-term outcomes using predefined ML models. Success demands that systems be continually evaluated for iterative improvements, rather than treated as a one-off implementation. After all, technology is constantly evolving, so there’s no room to be complacent or ignore the environment in which infrastructure operates. This is where human insight and expertise play a crucial role. For example, consider the matter of determining the right public or private cloud vendors for the business. Companies operating in highly regulated regions will need to consider how a cloud vendor can ensure data remains compliant with local regulations.


Insights from launching a developer-led bank

Traditional banks tend to treat policies as their primary tool for problem-solving. While policies are part of the source code that defines how a business operates, they do not define culture. An organisation’s real culture is found in the values and behaviours of the people who work there - how they interact, how they work towards their goals, and how they handle challenges. Culture is defined by who a company chooses to hire, fire, and promote. ... Unfortunately, traditional banks don't place much emphasis on core values and culture during hiring, preferring to focus solely on qualifications and experience. This is why many banks end up with a culture that is at odds with the one they claim to have - which is both misleading to the outside world and a source of strain and cognitive dissonance internally. Your focus should be on building a culture that goes beyond policy documents. You need a holistic recruitment strategy that assesses the candidate’s core values—how they work with others, their perception of accountability, and whether they display kindness and thoughtfulness. 


How global enterprises navigate the complex world of data privacy

Some of the strategies for balancing the need for personalized data analytics against ethical and legal data privacy responsibilities include: Data minimization: As per the previous response, avoid collecting excessive data that could pose a privacy risk, and only collect and use that which is specific to the business objective. Transparency: Be transparent in your policies about what is collected, how it’s collected and how it will be used. Ensure explicit consent from your end users. Strong data governance: Ensure strong oversight not only in areas such as data security, but also in privacy by design, customer education, and audits and reviews, to enable your data privacy posture to constantly evolve. The balance between customer analytics and privacy is a delicate one that requires an ongoing commitment to fostering a culture of privacy and respect for data and end users within your organization. ... As AI and machine learning technologies continue to evolve, the challenges include ethical considerations, bias and legal compliance, to name a few, but the opportunities are also significant.
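The data minimization strategy above can be sketched in code. This is a minimal illustration, not from the article: the purposes, field names and allow-lists are invented to show the idea of keeping only the fields that serve a stated business objective.

```python
# Hypothetical sketch of data minimization: keep only fields that serve
# a stated business purpose. Purposes and field names are invented.
ALLOWED_FIELDS = {
    "order_history": {"customer_id", "last_order_date", "order_total"},
    "support": {"customer_id", "email"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Return a copy of `record` stripped to the fields allowed for `purpose`."""
    allowed = ALLOWED_FIELDS.get(purpose)
    if allowed is None:
        raise ValueError(f"No allow-list defined for purpose: {purpose!r}")
    return {k: v for k, v in record.items() if k in allowed}

raw = {"customer_id": 42, "email": "a@example.com",
       "last_order_date": "2023-09-01", "order_total": 99.5,
       "birth_date": "1990-01-01"}          # an excessive field
print(minimize(raw, "order_history"))
# email and birth_date are dropped: they serve no purpose here
```

Forcing every collection through an explicit purpose-to-fields mapping also makes the transparency obligation easier to meet, since the mapping doubles as documentation of what is collected and why.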


Unmasking the MGM Resorts Cyber Attack: Why Identity-Based Authentication is the Future

As seen from the MGM cyber attack, relying on single-factor authentication is a glaring example of outdated security. This method is no longer adequate today, when cyber threats are increasingly sophisticated. Although a step in the right direction, multi-factor authentication can fall short if not implemented correctly. For instance, using easily accessible information as a second factor, like a text message sent to a phone, can be intercepted and exploited. The evolution of security measures has brought us from simple passwords to biometrics and beyond. Yet, many businesses are stuck in the past, relying on these half-measures. It’s not just about keeping up with the times; it’s about safeguarding your organization’s future. One-size-fits-all solutions are ineffective, and risk-based authentication should be the norm, not the exception. ... Security half-measures, like using codes, devices, or unverified biometrics as identity proxies, are more than just weak points; they open doors for cybercriminals. The MGM breach is a stark reminder of the dangers of compromised security.
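To make "risk-based authentication" concrete, here is a toy sketch of the idea: contextual signals are combined into a score, and stronger verification is demanded only when the risk warrants it. The signals, weights and thresholds are entirely invented for illustration; a real deployment would use a vendor's risk engine and tuned policies.

```python
# Illustrative (not production) risk-based authentication sketch.
# Signals, weights and thresholds are invented.
def risk_score(new_device: bool, unusual_location: bool, impossible_travel: bool) -> int:
    score = 0
    score += 40 if impossible_travel else 0   # strongest signal
    score += 30 if new_device else 0
    score += 20 if unusual_location else 0
    return score

def required_step(score: int) -> str:
    if score >= 60:
        return "deny-and-review"         # block and alert the security team
    if score >= 30:
        return "phishing-resistant-mfa"  # e.g. a hardware key, not SMS
    return "password-only"

print(required_step(risk_score(new_device=True, unusual_location=True,
                               impossible_travel=False)))
# a new device from an unusual location triggers strong MFA, not SMS
```

The point of the sketch is the shape of the policy: low-risk logins stay frictionless, while anomalous ones are escalated to factors that cannot be intercepted the way an SMS code can.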


Metrics-Driven Developer Productivity Engineering at Spotify

An engineering department could have an OKR on the lagging metric of MTTR, while a platform team supporting SREs would have a leading metric of log ingestion speed. These would both be in support of the company-level OKR to increase customer satisfaction, which is measured by things like net promoter score (NPS), active users and churn rate. This emphasizes one of the important goals of platform engineering: to increase engineers’ sense of purpose by connecting their work more closely to delivering business value. “Productivity cannot be measured easily. And certainly not with a single accurate number. And probably not even with a few of them. So these metrics about SRE efficiency or developer productivity, they need to be contextualized for your own company, your tech stack, your team even,” he said, emphasizing that the trends are typically more important than the actual values. “That does not mean that we cannot have a productive conversation about them. But it does mean there is no absolute way to measure” developer productivity, knowing that proxy metrics will never capture everything.
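As a small worked example of the lagging metric mentioned above, MTTR is simply the mean of resolution durations over a window of incidents. The incident timestamps here are made up; the article does not prescribe any particular computation.

```python
from datetime import datetime, timedelta

# Sketch: MTTR (mean time to recovery) computed from incident records,
# each a (detected, resolved) timestamp pair. Data is invented.
incidents = [
    (datetime(2023, 9, 1, 10, 0), datetime(2023, 9, 1, 11, 30)),  # 1.5 h
    (datetime(2023, 9, 5, 22, 0), datetime(2023, 9, 6, 0, 0)),    # 2 h
]

def mttr(incidents):
    total = sum((end - start for start, end in incidents), timedelta())
    return total / len(incidents)

print(mttr(incidents))  # mean of 1.5 h and 2 h -> 1:45:00
```

Because it is a lagging indicator, a trend in this number over successive quarters says more than any single value, which matches the caution in the quote above about absolute measurements.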



Quote for the day:

"A good plan executed today is better than a perfect plan executed tomorrow." -- General George Patton

Daily Tech Digest - September 24, 2023

How legacy systems are threatening security in mergers & acquisitions

Legacy systems are far more likely to get hacked. This is especially true for companies that become involved in private equity transactions, such as mergers, acquisitions, and divestitures. These transactions often result in IT system changes and large movements of data and financial capital, which leave organizations acutely vulnerable. With details of these transactions being publicized or publicly accessible, threat actors can specifically target companies likely to be involved in such deals. We have seen two primary trends throughout 2023: threat groups are closely following news cycles, enabling them to quickly target entire portfolios with zero-day attacks designed to upend aging technologies, disrupting businesses and their supply chains; and corporate espionage cases are on the rise as threat actors embrace longer dwell times and employ greater calculation in methods of monetizing attacks. Together, this means the number of strategically calculated attacks — which are more insidious than hasty smash-and-grabs — is on the rise.


How Frontend Devs Can Take Technical Debt out of Code

To combat technical debt, developers — even frontend developers — must see their work as a part of a greater whole, rather than in isolation, Purighalla advised. “It is important for developers to think about what they are programming as a part of a larger system, rather than just that particular part,” he said. “There’s an engineering principle, ‘Excessive focus on perfection of art compromises the integrity of the whole.’” That means developers have to think like full-stack developers, even if they’re not actually full-stack developers. For the frontend, that specifically means understanding the data that underlies your site or web application, Purighalla explained. “The system starts with obviously the frontend, which end users touch and feel, and interface with the application through, and then that talks to maybe an orchestration layer of some sort, of APIs, which then talks to a backend infrastructure, which then talks to maybe a database,” he said. “That orchestration and the frontend has to be done very, very carefully.” Frontend developers should take responsibility for the data their applications rely on, he said.


Digital Innovation: Getting the Architecture Foundations Right

While the benefits of modernization are clear, companies don’t need to be cutting edge everywhere, but they do need to apply the appropriate architectural patterns to the appropriate business processes. For example, Amazon Prime recently moved away from a microservices-based architecture for streaming media. In considering the additional complexity of service-oriented architectures, the company decided that a "modular monolith” would deliver most of the benefits for much less cost. Companies that make a successful transition to modern enterprise architectures get a few things right. ... Enterprise technology architecture isn’t something that most business leaders have had to think about, but they can’t afford to ignore it any longer. Together with the leaders of the technology function, they need to ask whether they have the right architecture to help them succeed. Building a modern architecture requires ongoing experimentation and a commitment to investment over the long term.


GenAI isn’t just eating software, it’s dining on the future of work

As we step into this transformative era, the concept of “no-collar jobs” takes center stage. Paul introduced this idea in his book “Human + Machine,” where new roles are expected to emerge that don’t fit into the traditional white-collar or blue-collar categories; instead, the shift is giving rise to what he called ‘no-collar jobs.’ These roles defy conventional categories, relying increasingly on digital technologies, AI, and automation to enhance human capabilities. In this emergence of new roles, the only threat is to those “who don’t learn to use the new tools, approaches and technologies in their work.” While this new future involves a transformation of tasks and roles, it does not necessitate jobs disappearing. ... Just as AI has become an integral part of enterprise software today, GenAI will follow suit. In the coming year, we can expect to see established software companies integrating GenAI capabilities into their products. “It will become more common for companies to use generative AI capabilities like Microsoft Dynamics Copilot, Einstein GPT from Salesforce, or GenAI capabilities from ServiceNow or other capabilities that will become natural in how they do things.”


The components of a data mesh architecture

In a monolithic data management approach, technology drives ownership. A single data engineering team typically owns all the data storage, pipelines, testing, and analytics for multiple teams—such as Finance, Sales, etc. In a data mesh architecture, business function drives ownership. The data engineering team still owns a centralized data platform that offers services such as storage, ingestion, analytics, security, and governance. But teams such as Finance and Sales would each own their data and its full lifecycle (e.g. making code changes and maintaining code in production). Moving to a data mesh architecture brings numerous benefits: it removes roadblocks to innovation by creating a self-service model for teams to create new data products; it democratizes data while retaining centralized governance and security controls; and it decreases data project development cycles, saving money and time that can be driven back into the business. Because it’s evolved from previous approaches to data management, data mesh uses many of the same tools and systems that monolithic approaches use, yet exposes these tools in a self-service model combining agility, team ownership, and organizational oversight.
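The split described above — domain teams own their data products while the central platform retains governance — can be sketched as a registration step. Everything here (the metadata fields, the catalog shape, the product names) is a hypothetical illustration of the pattern, not any specific data mesh product.

```python
# Sketch: domain teams self-register data products; the central platform
# enforces shared governance (here, a required-metadata check).
# Field names and products are invented.
REQUIRED_METADATA = {"owner", "domain", "schema_version", "retention_days"}

catalog: dict = {}

def register_data_product(name: str, metadata: dict) -> None:
    """Accept a data product only if it carries the governance metadata."""
    missing = REQUIRED_METADATA - metadata.keys()
    if missing:
        raise ValueError(f"Missing governance metadata: {sorted(missing)}")
    catalog[name] = metadata

register_data_product("sales.orders", {
    "owner": "sales-data-team", "domain": "sales",
    "schema_version": 3, "retention_days": 365,
})
print(sorted(catalog))
```

The Sales team owns what goes into `sales.orders` and when it changes; the platform team owns only the rule that every product must declare an owner, a domain, and a retention policy.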


Six major trends in data engineering

Some modern data warehouse solutions, including Snowflake, allow data providers to seamlessly share data with users by making it available as a feed. This does away with the need for pipelines, as live data is shared in real-time without having to move the data. In this scenario, providers do not have to create APIs or FTPs to share data and there is no need for consumers to create data pipelines to import it. This is especially useful for activities such as data monetisation or company mergers, as well as for sectors such as the supply chain. ... Organisations that use data lakes to store large sets of structured and semi-structured data are now tending to create traditional data warehouses on top of them, thus generating more value. Known as a data lakehouse, this single platform combines the benefits of data lakes and warehouses. It is able to store unstructured data while providing the functionality of a data warehouse, to create a strategic data storage/management system. In addition to providing a data structure optimised for reporting, the data lakehouse provides a governance and administration layer and captures specific domain-related business rules.


From legacy to leading: Embracing digital transformation for future-proof growth

Digital transformation without a clear vision and roadmap is identified as a big reason for failure. Several businesses may adopt change because of emerging trends and rapid innovation without evaluating their existing systems or business requirements. To avoid such failure, every tech leader must develop a clear vision, and comprehensive roadmap aligned with organizational goals, ensuring each step of the transformation contributes to the overarching vision. ... The rapid pace of technological change often outpaces the availability of skilled professionals. In the meantime, tech leaders may struggle to find individuals with the right expertise to drive the transformation forward. To address this, businesses should focus on strategic upskilling using IT value propositions and hiring business-minded technologists. Furthermore, investing in individual workforce development can bridge this gap effectively. ... Many organizations grapple with legacy systems and outdated infrastructure that may not seamlessly integrate with modern digital solutions. 


7 Software Testing Best Practices You Should Be Talking About

What sets the very best testers apart from the pack is that they never lose sight of why they’re conducting testing in the first place, and that means putting user interest first. These testers understand that testing best practices aren’t necessarily things to check off a list, but rather steps to take to help deliver a better end product to users. To become such a tester, you need to always consider software from the user’s perspective and take into account how the software needs to work in order to deliver on the promise of helping users do something better, faster and easier in their daily lives. ... In order to keep an eye on the bigger picture and test with the user experience in mind, you need to ask questions, and lots of them. Testers have a reputation for asking questions, and it often comes across as them trying to prove something, but there’s actually an important reason why the best testers ask so many questions.


Why Data Mesh vs. Data Lake Is a Broader Conversation

Most businesses with large volumes of data use a data lake as their central repository to store and manage data from multiple sources. However, the growing volume and varied nature of data in data lakes makes data management challenging, particularly for businesses operating with various domains. This is where a data mesh approach can tie in to your data management efforts. The data mesh is a microservice, distributed approach to data management whereby extensive organizational data is split into smaller, multiple domains and managed by domain experts. The value provided by implementing a data mesh for your organization includes simpler management and faster access to your domain data. By building a data ecosystem that implements a data lake with data mesh thinking in mind, you can grant every domain operating within your business its product-specific data lake. This product-specific data lake helps provide cost-effective and scalable storage for housing your data and serving your needs. Additionally, with proper management by domain experts like data product owners and engineers, your business can serve independent but interoperable data products.


The Hidden Costs of Legacy Technology

Maintaining legacy tech can prove to be every bit as expensive as a digital upgrade. This is because IT staff have to spend time and money to keep the obsolete software functioning. This wastes valuable staff hours that could be channeled into improving products, services, or company systems. A report from Dell estimates that organizations currently allocate 60-80% of their IT budget to maintaining existing on-site hardware and legacy apps, which leaves only 20-40% of the budget for everything else. ... No company can defer upgrading its tech indefinitely: sooner or later, the business will fail as its rivals outpace it. Despite this urgency, many business leaders mistakenly believe that they can afford to defer their tech improvements and rely on dated systems in the meantime. However, this is a misapprehension and can lead to ‘technical debt.’ ‘Technical debt’ describes the phenomenon in which the use of legacy systems defers short-term costs in favor of long-term losses that are incurred when reworking the systems later on.



Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas

Daily Tech Digest - September 23, 2023

A CISO’s First 90 Days: The Ultimate Action Plan and Advice

It’s a CISO’s responsibility to establish a solid security foundation as rapidly as possible, and there are many mistakes that can be made along the way. This is why the first 90 days are the most important for new CISOs. Without a clear pathway to success in the early months, CISOs can lose confidence in their ability as change agents and put their entire organization at risk of data theft and financial loss. No pressure! Here’s our recommended roadmap for CISOs in the first 90 days of a new role. ... This means they can reduce the feeling of overwhelm and work strategically toward business goals. For a new CISO, it can be challenging trying to locate and classify all the sensitive data across an organization, not to mention ensuring that it’s also safe from a variety of threats. Data protection technology is often focused on perimeters and endpoints, giving internal bad actors the perfect opportunity to slip through any security gaps in files, folders, and devices. For large organizations, it’s practically impossible to audit data activity at scale without a robust data security posture management (DSPM) solution.


There’s No Value in Observability Bloat. Let’s Focus on the Essentials

Telemetry data gathered from the distributed components of modern cloud architectures needs to be centralized and correlated for engineers to gain a complete picture of their environments. Engineers need a solution with critical capabilities such as dashboarding, querying and alerting, and AI-based analysis and response, and they need the operation and management of the solution to be streamlined. What’s important for them to know is that it’s not necessary to spend more to ensure peak performance and visibility as their environmental complexity grows. ... No doubt, more data is being generated, but most of it is not relevant or valuable to an organization. Observability can be optimized to bring greater value to customers, and that’s where the market is headed. Call it “essential observability.” It’s a disruptive vision to propose a re-architected approach to observability, but what engineers need is a new approach making it easier to surface insights from their telemetry data while deprioritizing low-value data. Costs can be reduced by consuming only the data that enables teams to maintain performance and drive smart business decisions.
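The "essential observability" idea above — deprioritize low-value data before it is ingested and billed — can be illustrated with a simple edge filter. The filtering rules here (drop debug-level events and health-check noise) are invented examples; real pipelines would make such rules configurable per team.

```python
# Sketch: filter telemetry at the edge so only decision-supporting data
# reaches (and is billed by) the central observability platform.
# The rules below are illustrative, not a recommendation.
def keep(event: dict) -> bool:
    if event.get("level") == "DEBUG":
        return False                 # rarely worth central storage
    if event.get("path") == "/healthz":
        return False                 # high-volume, low-signal health checks
    return True

events = [
    {"level": "DEBUG", "path": "/api/orders"},
    {"level": "INFO", "path": "/healthz"},
    {"level": "ERROR", "path": "/api/orders"},
]
print([e for e in events if keep(e)])  # only the ERROR event survives
```

Even a crude rule set like this shows the cost lever: ingestion volume drops while the events that drive alerting and debugging are retained.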


Shedding Light on Dark Patterns in FinTech: Impact of DPDP Act

In practice, these patterns exploit human psychology and trick people into making unwanted choices or purchases. They have become a menace for the FinTech industry, used to encourage people to sign up for loans, credit cards, and other financial products that they may not need or understand. However, the new Digital Personal Data Protection Act, 2023 (“DPDP Act”), can be used to bring such dark patterns under control. The DPDP Act requires online platforms to seek the consent of Data Principals through clear, specific and unambiguous notice before processing any data. Further, the Act empowers individuals to retract or withdraw consent to any agreement at any juncture. ... Companies will need to review their user interfaces and remove any dark patterns they are using, protect personal data and use it for ‘legitimate purposes’ only, and take consent from users, through clear affirmative action, in unambiguous terms. They will also need to develop new ways to promote their products and services without relying on deception.


Can business trust ChatGPT?

It might seem premature to worry about trust when there is already so much interest in the opportunities Gen AI can offer. However, it needs to be recognized that there’s also an opportunity cost — inaccuracy and misuse could be disastrous in ways organizations can’t easily anticipate. Up until now, digital technology has traditionally been viewed as trustworthy in the sense that it is seen as deterministic. Like an Excel formula, it will be executed in the same manner 100% of the time, leading to a predictable, consistent outcome. Even when the outcome yields an error — due to implementation issues, changes in the context in which it has been deployed, or even bugs and faults — there is nevertheless a sense that technology should work in a certain way. In the case of Gen AI, however, things are different; even the most optimistic hype acknowledges that it can be unpredictable, and its output is often unexpected. Trust in consistency seems to be less important than excitement at the sheer range of possibilities Gen AI can deliver, seemingly in an instant.


A Few Best Practices for Design and Implementation of Microservices

The first step is to define the microservices architecture. It has to be established how the services will interact with each other before a company attempts to optimise their implementation. Once the microservices architecture gets going, teams must be able to capitalise on the increase in delivery speed. It is better to start with a few coarse-grained but self-contained services; fine-graining can happen as the implementation matures over time. The developers, operations team, and testers may have extensive experience with monoliths, but a microservices-based system is a new reality, so they need time to cope with this shift. Do not discard the monolithic application immediately. Instead, have it co-exist with the new microservices, and iteratively deprecate similar functionalities in the monolithic application. This is not easy and requires a significant investment in people and processes to get started. As with any technology, it is always better to avoid the big-bang approach and identify ways to get the toes wet before diving in head first.
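The co-existence step described above is often implemented as a routing facade (commonly called the "strangler fig" pattern): migrated paths go to the new services, everything else still hits the monolith. The endpoints and service names below are invented to show the shape of the idea.

```python
# Sketch of monolith/microservice co-existence via a routing facade
# ("strangler fig" pattern). Paths and service names are invented.
MIGRATED = {"/billing", "/invoices"}   # prefixes already carved out

def route(path: str) -> str:
    """Send migrated path prefixes to the new service, the rest to the monolith."""
    prefix = "/" + path.strip("/").split("/")[0]
    return "billing-service" if prefix in MIGRATED else "monolith"

print(route("/billing/123"))  # -> billing-service
print(route("/orders/9"))     # -> monolith
```

As each functionality is deprecated in the monolith, its prefix moves into the migrated set, so the cutover happens one route at a time instead of as a big bang.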


Bridging Silos and Overcoming Collaboration Antipatterns in Multidisciplinary Organisations

Collaboration is at the heart of teamwork. Many modern organisations set up teams to be cross-functional or multidisciplinary. Multidisciplinary teams are made up of specialists from different disciplines collaborating daily towards a shared outcome. They have the roles needed to design, plan, deliver, deploy and iterate a product or service. Modern approaches and frameworks often focus on increasing flow and reducing blockers, and one way to do this is to remove the barrier between functions. However, as organisations grow in size and complexity, they look for different ways of working together, and some of these create collaboration antipatterns. Three of the most common antipatterns I see and have named here are: one person split across multiple teams; product vs. engineering wars; and X-led organisations.


The Rise of the Malicious App

Threat actors have changed the playing field with the introduction of malicious apps. These applications add nothing of value to the hub app. They are designed to connect to a SaaS application and perform unauthorized activities with the data contained within. When these apps connect to the core SaaS stack, they request certain scopes and permissions. These permissions then allow the app the ability to read, update, create, and delete content. Malicious applications may be new to the SaaS world, but it's something we've already seen in mobile. Threat actors would create a simple flashlight app, for example, that could be downloaded through the app store. Once downloaded, these minimalistic apps would ask for absurd permission sets and then data-mine the phone. ... Threat actors are using sophisticated phishing attacks to connect malicious applications to core SaaS applications. In some instances, employees are led to a legitimate-looking site, where they have the opportunity to connect an app to their SaaS. In other instances, a typo or slightly misspelled brand name could land an employee on a malicious application's site. 
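One practical defence against the over-scoped apps described above is to compare each third-party app's granted permissions against what its stated purpose actually needs. The scope names and the expected-scope table below are invented for illustration; real OAuth scope strings vary by SaaS platform.

```python
# Sketch: flag third-party app grants whose scopes exceed the app's
# stated purpose. App names and scope strings are invented.
EXPECTED = {
    "calendar-sync": {"calendar.read"},   # what this app should need
}

def excessive_scopes(app: str, granted: set) -> set:
    """Return granted scopes beyond the app's expected set (unknown apps: all)."""
    return granted - EXPECTED.get(app, set())

granted = {"calendar.read", "files.readwrite", "mail.send"}
print(sorted(excessive_scopes("calendar-sync", granted)))
# ['files.readwrite', 'mail.send'] -> worth a manual review
```

A calendar utility asking for write access to files and the ability to send mail is exactly the "flashlight app with absurd permissions" pattern carried over into the SaaS world, and this kind of diff surfaces it before data is mined.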


What Is GreenOps? Putting a Sustainable Focus on FinOps

If the future of cloud sustainability appears bleak, Arora advises looking to examples of other tech advancements and the curve of their development, where early adopters led the way and then the main curve eventually followed. “The same thing happened with electric cars,” Arora points out. “They didn’t enter the mainstream because they were better for the environment; they entered the mainstream because the cost came down.” And this is what he predicts will happen with cloud sustainability. Right now, the early adopters are stepping forward and championing GreenOps as a part of the FinOps equation. In a few years, others will be able to measure their data, analyze how they reduced their carbon impact and what effect it had on cloud spending and savings, and then follow their lead. It’s naive to think that most companies will go out of their way (and perhaps even increase their cloud spending) to reduce their carbon footprint. 


The Growing Importance of AI Governance

As AI systems become more powerful and complex, businesses and regulatory agencies face two formidable obstacles: the complexity of the systems requires rule-making by technologists rather than politicians, bureaucrats, and judges; and the thorniest issues in AI governance involve value-based decisions rather than purely technical ones. An approach based on regulatory markets has been proposed that attempts to bridge the divide between government regulators who lack the required technical acumen and technologists in the private sector whose actions may be undemocratic. The technique adopts an outcome-based approach to regulation in place of the traditional reliance on prescriptive command-and-control rules. AI governance under this model would rely on licensed private regulators charged with ensuring AI systems comply with outcomes specified by governments, such as preventing fraudulent transactions and blocking illegal content. The private regulators would also be responsible for the safe use of autonomous vehicles, use of unbiased hiring practices, and identification of organizations that fail to comply with the outcome-based regulations.


Legal Issues for Data Professionals

Lawyers identify risks data professionals may not know they have. Moreover, because data is a new field of law, lawyers need to be innovative in creating legal structures in contracts to allow two or more parties to achieve their goals. For example, there are significant challenges attempting to apply the legal techniques traditionally used with other classes of business assets (such as intellectual property, real property, and corporate physical assets) to data as a business asset class. Because the old legal techniques do not fit well, lawyers and their clients need to develop new ways of handling the business and legal issues that arise, and in so doing, invent new legal structures that meet the specific attributes of data that differentiate data from other business assets. To take one example, using software agreements as a template for data transactions will not always work because the IP rights for software do not align with data, the concept of software deliverables and acceptance testing is not a good fit, and the representations and warranties are both over and underinclusive. 



Quote for the day:

"Rarely have I seen a situation where doing less than the other guy is a good strategy." -- Jimmy Spithill

Daily Tech Digest - September 22, 2023

HR Leaders’ strategies for elevating employee engagement in global organisations

In the age of AI, HR technologies have emerged as powerful tools for enhancing employee engagement by streamlining HR processes, improving communication, and personalising the employee experience. Sreedhara added, “By embracing HR Tech, we can enhance the employee experience by reducing administrative burdens, improving access to information, and enabling employees to focus on more meaningful aspects of their work. Moreover, these technologies can contribute to greater employee engagement. Enhancing employee experience via HR tech and tools can improve efficiency and empower employees to take more control of their work-related tasks. We have also enabled some self-service technologies, like: an employee portal that serves all HR-related tasks and provides access to policies and processes across the employee life cycle (onboarding, performance management, benefits enrolment, and expense management); employee feedback and surveys; and a databank for predictive analysis (early warning systems) to manage employee engagement.”


Bolstering enterprise LLMs with machine learning operations foundations

Risk mitigation is paramount throughout the entire lifecycle of the model. Observability, logging, and tracing are core components of MLOps processes, which help monitor models for accuracy, performance, data quality, and drift after their release. This is critical for LLMs too, but there are additional infrastructure layers to consider. LLMs can “hallucinate,” where they occasionally output false knowledge. Organizations need proper guardrails—controls that enforce a specific format or policy—to ensure LLMs in production return acceptable responses. Traditional ML models rely on quantitative, statistical approaches to apply root cause analyses to model inaccuracy and drift in production. With LLMs, this is more subjective: it may involve running a qualitative scoring of the LLM’s outputs, then running it against an API with pre-set guardrails to ensure an acceptable answer. Governance of enterprise LLMs will be both an art and science, and many organizations are still understanding how to codify them into actionable risk thresholds. 
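The guardrail idea described above can be made concrete with a small sketch. This is an assumption-laden illustration, not a specific product: the required format (JSON with an "answer" key) and the banned-term policy are invented stand-ins for whatever format and policy an organization actually enforces before an LLM response reaches a user.

```python
# Illustrative post-generation guardrail: validate an LLM response against a
# required output format and a simple content policy before returning it.
import json

def guardrail(raw_response: str, banned_terms=("ssn", "password")) -> dict:
    """Reject responses that are not valid JSON with an 'answer' key,
    or whose answer contains a banned term."""
    try:
        parsed = json.loads(raw_response)
    except json.JSONDecodeError:
        return {"ok": False, "reason": "malformed output"}
    answer = parsed.get("answer", "")
    if any(term in answer.lower() for term in banned_terms):
        return {"ok": False, "reason": "policy violation"}
    return {"ok": True, "answer": answer}

print(guardrail('{"answer": "Rotate credentials quarterly."}'))  # passes
print(guardrail("I think it might be hunter2"))                  # rejected: not JSON
```

Production guardrail layers add retries (re-prompting the model on failure) and the kind of qualitative scoring the article mentions, but the control point is the same: no raw model output reaches the caller unchecked.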


Reimagining Application Development with AI: A New Paradigm

AI-assisted pair programming is a collaborative coding approach where an AI system — like GitHub Copilot or TestPilot — assists developers during coding. It’s an increasingly common approach that significantly impacts developer productivity. In fact, GitHub Copilot is now behind an average of 46 percent of developers’ code, and users are seeing 55 percent faster task completion on average. For new software developers, or those interested in learning new skills, AI-assisted pair programming is like training wheels for coding. With the benefit of code snippet suggestions, developers can avoid struggling with beginner pitfalls like language syntax. Tools like ChatGPT can act as a personal, on-demand tutor — answering questions, generating code samples, and explaining complex code syntax and logic. These tools dramatically speed the learning process and help developers gain confidence in their coding abilities. Building applications with AI tools hastens development and produces more robust code.


Don't Let AI Frenzy Lead to Overlooking Security Risks

"Everybody is talking about prompt injection or backporting models because it is so cool and hot. But most people are still struggling with the basics when it comes to security, and these basics continue to be wrong," said John Stone - whose title at Google Cloud is "chaos coordinator" - while speaking at Information Security Media Group's London Cybersecurity Summit. Successful AI implementation requires a secure foundation, meaning that firms should focus on remediating vulnerabilities in the supply chain, source code, and larger IT infrastructure, Stone said. "There are always new things to think about. But the older security risks are still going to happen. You still have infrastructure. You still have your software supply chain and source code to think about." Andy Chakraborty, head of technology platforms at Santander U.K., told the audience that highly regulated sectors such as banking and finance must especially exercise caution when deploying AI solutions that are trained on public data sets.


The second coming of Microsoft's do-it-all laptop is more functional than ever

Microsoft's Surface Laptop Studio 2 is really unlike any other laptop on the market right now. The screen is held up by a tiltable hinge that lets it switch from what I'll call "regular laptop mode" to stage mode (the display is angled like the image above) to studio mode (the display is laid flat, screen-side up, like a tablet). The closest thing I can think of is, well, the previous Laptop Studio model, which sports the same shape-shifting form factor. But after today, if you're the customer for Microsoft's screen-tilting Surface device, then your eyes will be all over the latest model, not the old. That's a good deal, because, unlike the predecessor, the new Surface Laptop Studio 2 features an improved 13th Gen Intel Core H-class processor, NVIDIA's latest RTX 4050/4060 GPUs, and an Intel NPU on Windows for video calling optimizations (which never hurts to have). Every Microsoft expert on the demo floor made it clear to me that gaming and content creation workflows are still the focus of the Studio laptop, so the changes under the hood make sense.


Why more security doesn’t mean more effective compliance

Worse, the more tools there are to manage, the harder it might be to prove compliance with an evolving patchwork of global cybersecurity rules and regulations. That’s especially true of legislation like DORA, which focuses less on prescriptive technology controls and more on providing evidence of why policies were put in place, how they’re evolving, and how organizations can prove they’re delivering the intended outcomes. In fact, it explicitly states that security and IT tools must be continuously monitored and controlled to minimize risk. This is a challenge when organizations rely on manual evidence gathering. Panaseer research reveals that while 82% are confident they’re able to meet compliance deadlines, 49% mostly or solely rely on manual, point-in-time audits. This simply isn’t sustainable for IT teams, given the number of security controls they must manage, the volume of data they generate, and continuous, risk-based compliance requirements. They need a more automated way to continuously measure and evidence KPIs and metrics across all security controls.
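The "automated way to continuously measure and evidence KPIs" can be sketched in miniature. This is a hedged illustration with invented field names, not Panaseer's product or any standard data model: a real pipeline would pull control state from EDR, patching, and identity tools via their APIs rather than from a hand-built list.

```python
# Minimal sketch of automated compliance measurement: compute control
# coverage from an asset inventory instead of a manual point-in-time audit.

def control_coverage(assets, control):
    """Percentage of assets on which a given security control is active."""
    covered = sum(1 for a in assets if control in a["active_controls"])
    return round(100 * covered / len(assets), 1)

assets = [
    {"host": "web-01", "active_controls": {"edr", "patching"}},
    {"host": "db-01",  "active_controls": {"edr"}},
    {"host": "app-01", "active_controls": {"patching"}},
]
print(control_coverage(assets, "edr"))  # 2 of 3 assets covered
```

Running a metric like this on a schedule, and keeping the history, is what turns a point-in-time audit artifact into continuous evidence.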


EU Chips Act comes into force to ensure supply chain resilience

“With the entry into force today of the European Chips Act, Europe takes a decisive step forward in determining its own destiny. Investment is already happening, coupled with considerable public funding and a robust regulatory framework,” said Thierry Breton, commissioner for Internal Market, in comments posted alongside the announcement. “We are becoming an industrial powerhouse in the markets of the future — capable of supplying ourselves and the world with both mature and advanced semiconductors. Semiconductors that are essential building blocks of the technologies that will shape our future, our industry, and our defense base,” he said. The European Union’s Chips Act is not the only government-backed plan aimed at shoring up domestic chip manufacturing in the wake of the supply chain crisis that has plagued the semiconductor industry in recent years. In the past year, the US, UK, Chinese, Taiwanese, South Korean, and Japanese governments have all announced similar plans.


Microsoft Copilot Brings AI to Windows 11, Works Across Multiple Apps and Your Phone

With Copilot, it's possible to ask the AI to write a summary of a book in the middle of a Word document, or to select an image and have the AI remove the background. In one example, Microsoft showed a long email and demonstrated that when you highlight the text, Copilot appears so you can ask it questions related to the email. And that information can be cross-referenced to information found online, such as asking Copilot for lunch spots nearby based on the email's content. Copilot will be available on the Windows 11 desktop taskbar, making it instantly available at one click. Microsoft says that whether you're using Word, PowerPoint or Edge, you can call on Copilot to assist you with various tasks. It can also be called on via voice. Copilot can connect to your phone, so, for example, you can ask it when your next flight is and it'll look through your text messages and find the necessary information. Edge, Microsoft's web browser, will also have Copilot integrations. 


What Are the Biggest Lessons from the MGM Ransomware Attack?

Ransomware groups increasingly focus on branding and reputation, according to Ferhat Dikbiyik, head of research at third-party risk management software company Black Kite. “When ransomware first made its appearance, the attacks were relatively unsophisticated. Over the years, we have observed a marked elevation in their capabilities and tactics,” he tells InformationWeek in a phone interview. ... The group also called out: “The rumors about teenagers from the US and UK breaking into this organization are still just that -- rumors. We are waiting for these ostensibly respected cybersecurity firms who continue to make this claim to start providing solid evidence to support it.” Dikbiyik also notes that ransomware groups’ more nuanced selection of targets is an indication of increased professionalism. “These groups are doing their homework. They have resources. They acquire intelligence tools…they try to learn their targets,” he says. While ransomware is lucrative, money isn’t the only goal. Selecting high-profile targets, such as MGM, helps these groups to build a reputation, according to Dikbiyik.


A Dimensional Modeling Primer with Mark Peco

“Dimensional models are made up of two elements: facts and dimensions,” he explained. “A fact quantifies a property (e.g., a process cost or efficiency score) and is a measurement that can be captured at a point in time. It’s essentially just a number. A dimension provides the context for that number (e.g., when it was measured, who was the customer, what was the product).” It’s through combining facts and dimensions that we create information that can be used to answer business questions, especially those that relate to process improvement or business performance, Peco said. Peco went on to say that one of the biggest challenges he sees with companies using dimensional models is with integrating the potentially huge number of models into one coherent picture of the business. “A company has many, many processes,” he said, “and each requires its own dimensional model, so there has to be some way of joining these models together to give a complete picture of the organization.” 
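Peco's fact/dimension split can be shown with a toy example. The table contents below are invented for illustration; in practice this join runs in SQL against a star schema, but the mechanics are the same: the fact row holds the number plus foreign keys, and the dimensions supply the context.

```python
# Toy star schema: a fact is a measurement with keys; dimensions give context.
dim_product = {1: {"name": "Widget", "category": "Hardware"}}
dim_date = {20230921: {"year": 2023, "quarter": "Q3"}}

fact_sales = [
    {"product_key": 1, "date_key": 20230921, "amount": 250.0},
]

def enrich(fact):
    """Join one fact row to its dimensions to produce an answerable record."""
    return {
        "amount": fact["amount"],
        "product": dim_product[fact["product_key"]]["name"],
        "quarter": dim_date[fact["date_key"]]["quarter"],
    }

print(enrich(fact_sales[0]))
```

The integration challenge Peco raises shows up here too: "conformed" dimensions like `dim_date` must be shared across the many per-process models, or the joined pictures can never be stitched together.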



Quote for the day:

"Things work out best for those who make the best of how things work out." -- John Wooden

Daily Tech Digest - September 21, 2023

6 deadly sins of enterprise architecture

The simplest way to build out enterprise software is to leverage the power of various tools, portals, and platforms constructed by outsiders. Often 90%+ of the work can be done by signing a purchase order and writing a bit of glue code. But trusting the key parts of an enterprise to an outside company has plenty of risks. Maybe some private equity firm buys the outside firm, fires all the good workers, and then jacks up the price knowing you can’t escape. Suddenly, having put all your eggs in one platform starts to look like a bad bet. No one remembers the simplicity and consistency that came from a single interface from a single platform. Spreading out and embracing multiple platforms, though, can be just as painful. The sales team may promise that the tools are designed to interoperate and speak industry standard protocols, but that gets you only halfway there. Each may store the data in an SQL database, but some use MySQL, others use PostgreSQL, and others use Oracle. There’s no simple answer. Too many platforms create a Tower of Babel. Too few brings the risk of vendor lock-in and all the pain of opening that email with the renewal contract.


Manufacturing firms make early bets on the industrial metaverse

The building blocks of the industrial metaverse are “frequently proprietary, siloed and standalone,” according to a recent report by Miller and Forrester colleagues. Digital twins — which might use IoT sensor data and 3D modelling to provide a real-time picture of a piece of equipment or factory, for example — are perhaps closest to realization, but are still limited in some senses. “The reality today is that most digital twins are still asset- and vendor-specific,” Miller told Computerworld, with the same manufacturer responsible for both hardware and software. For example, an ABB robot may be sold with an ABB digital twin, or a Siemens motor will come with a Siemens digital twin — but getting them to work together can be a challenge. While these types of tools offer clear benefits for customers, firms that own multiple products from multiple vendors will eventually want “one digital twin of how the factory or the line is operating, not 100 digital twins of the different components,” said Miller. Even the most advanced precursor technologies, such as factory-spanning digital twins, tend to be the product of a partnership with one vendor.


How businesses can vet their cybersecurity vendors

Companies can’t assume that the vendor is telling the truth, particularly in the authentication market, where there is currently no standardised testing to confirm solutions pass metrics such as ‘phishing resistance’. When talking to a vendor, whilst it may seem simple, the organisation should first ask: how does the tool prevent social engineering and AiTM attacks? Whilst some solutions might claim to be passwordless or ‘phishing-resistant’, they could instead simply hide the password so that authentication is more convenient, but the vulnerability remains. The team needs to determine if the solution eliminates passwords from both the authentication flow and the account recovery flow, should the user lose their typical login device. And the tool must implement “verifier impersonation protection” to thwart AiTM/proxy-based attacks. Getting the security team to conduct their research beforehand enables them to come prepared with detailed questions and helps cut through the buzzwords vendors use. To go a step further, vetting the vendor allows security teams to learn more about the tool and uncover the truth.


Hidden dangers loom for subsea cables, the invisible infrastructure of the internet

Subsea cables can fall under a wide range of regulatory regimes, laws and authorities. At national level, there may be several authorities involved in their protection, including national telecom authorities, authorities under the NIS Directive, cybersecurity agencies, national coastguard, military, etc. There are also international treaties in place to be considered, establishing universal norms and the legal boundaries of the sea. ... Challenges for subsea cable resilience: Accidental, unintentional damage through fishing or anchoring has so far been the cause of most subsea cable incidents; Natural phenomena such as undersea earthquakes or landslides can have a significant impact, especially in places where there is a high concentration of cables; Chokepoints, where many cables are installed close to each other, are single points of failure, where one physical attack could strain the cable repair capacity; Physical attacks and cyberattacks should be considered as threats for the subsea cables, the landing points, and the ICT at the landing points.


Datacentre operators ‘hesitant’ over how to proceed with server farm builds as AI hype builds

“The developments in generative AI and the increasing use of a wide range of AI-based applications in datacentres, edge infrastructure and endpoint devices require the deployment of high-performance graphics processing units and optimised semiconductor devices,” said Alan Priestley, vice-president analyst at Gartner. “This is driving the production and deployment of AI chips.” And while Gartner’s figures suggest the AI trend is going to continue to take the world of tech by storm, the market watcher’s recently published Hype Cycle for emerging technologies lists generative AI as being at the “peak of inflated expectations”, which might go some way to explaining why operators are reluctant to rush to kit out their sites to accommodate this trend. For colocation operators that are targeting hyperscale cloud firms, many of which regularly talk up the potential for generative AI to transform how enterprises operate, there is perhaps less reticence, said Onnec’s Linqvist.


Developers: Is Your API Designed for Attackers?

The security firm analyzed 40 public breaches to see what role APIs played in security problems, which Snyder featured in his 2023 Black Hat conference presentation. The issue might be built-in vulnerabilities, misconfigurations in the API, or even a logical flaw in the application itself — and that means it falls on developers to fix it, Snyder said. “It’s a range of things, but it is generally with their own APIs,” Snyder told The New Stack. ”It is in their domain of influence, and honestly, their domain of control, because it is ultimately down to them to build a secure API.” The number of breaches analyzed is small — it was limited to publicly disclosed breaches — but Snyder said the problem is potentially much more pervasive. ... In the last couple of months, he said, security researchers who work on this space have uncovered billions of records that could have been breached through poor API design. He pointed to the API design flaws in basically every full-service carrier’s frequent flyer program, which could have exposed entire datasets or allowed for the awarding of unlimited miles and hotel points.
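The frequent-flyer flaw described above is the classic broken object-level authorization pattern: the endpoint trusts a caller-supplied ID. The sketch below is an invented illustration of the flaw and its fix, not the code from any breached carrier; account IDs, owners, and balances are made up.

```python
# Illustrative broken object-level authorization (BOLA) and its remediation.
ACCOUNTS = {
    "A1": {"owner": "alice", "miles": 1200},
    "A2": {"owner": "bob", "miles": 300},
}

def get_miles_insecure(account_id):
    # Flaw: any caller can read any account just by enumerating IDs.
    return ACCOUNTS[account_id]["miles"]

def get_miles_secure(account_id, caller):
    # Fix: the API checks that the authenticated caller owns the object.
    account = ACCOUNTS[account_id]
    if account["owner"] != caller:
        raise PermissionError("caller does not own this account")
    return account["miles"]

print(get_miles_secure("A1", "alice"))  # allowed: alice owns A1
```

Because the flaw is a logic error rather than a misconfiguration, it is invisible to scanners that only check headers and TLS — which is why, as Snyder argues, it falls on the developers who own the API to fix it.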


Rethinking Cybersecurity: The Power of the Hacker Mindset

Embracing a hacker mindset involves adopting an external viewpoint of your business to uncover vulnerabilities before they’re exploited. This includes embracing practices like ethical hacking and penetration testing. While forming a specialised ethical hacking team is an option, embedding this mindset within cyber teams and your wider business is equally effective. Key to this transformation is upskilling. Businesses should be offering training to encourage creative thinking when it comes to cybersecurity. Instead of waiting for breaches to learn from mistakes, being proactive is crucial. Regular, monthly upskilling for cybersecurity and IT teams, rather than every six months or even a year, keeps them on the front foot. Encouraging a hacking mindset also shouldn’t be confined to cyber experts; all employees should undergo cyber awareness training. In this fight, businesses and individuals aren’t alone. Numerous training platforms are available, but choosing those that concentrate on providing practical, hands-on skills rooted in real-world attack scenarios is essential. 


How to get started with prompt engineering

Joseph Reeve leads a team of people working on features that require prompt engineering at Amplitude, a product analytics software provider. He has also built internal tooling to make it easier to work with LLMs. That makes him a seasoned professional in this emerging space. As he notes, "the great thing about LLMs is that there’s basically no hurdle to getting started—as long as you can type!" If you want to assess someone's prompt engineering advice, it's easy to test-drive their queries in your LLM of choice. Likewise, if you're offering prompt engineering services, you can be sure your employers or clients will be using an LLM to check your results. So the question of how you can learn about prompt engineering—and market yourself as a prompt engineer—doesn't have a simple, set answer, at least not yet. "We're definitely in the 'wild west' period," says AIPRM's King. "Prompt engineering means a lot of things to different people. To some it's just writing prompts. To others it's fine-tuning and configuring LLMs and writing prompts."


Australia’s new cybersecurity strategy: Build “cyber shields” around the country

The first shield proposes a long-term education of citizens and businesses so that by 2030 they understand cyberthreats and how to protect themselves. This "shield" comes with a plan B: citizens and businesses should have proper support in place so that when they are the victims of a cyber-attack, they're able to get back up off the mat very quickly. The second shield is for safer technology, under which the federal government will treat insecure software like any other consumer product deemed unsafe. "So, in 2030 our vision for safe technology is a world where we have clear global standards for digital safety in products that will help us drive the development of security into those products from their very inception," O'Neil said. ... The fourth proposed shield will focus on protecting Australians' access to critical infrastructure, with the Home Affairs and Cybersecurity minister saying that "part of this year will be about government lifting up its own cyber defences to make sure we're protecting our country."


Modeling Asset Protection for Zero Trust – Part 2

The goal when modeling the data environment for a Zero Trust initiative is to have the information available to decide what data should be available when, where, and by whom. That requires you to know what data you have, its value to the business, and the risk level if lost. The information is used to inform an automated rules engine that enforces governance based on the state of the data request journey. It is not to define or modify a data model. Hopefully, you already have this information catalogued. From a digital asset perspective, most companies think of their data as their crown jewels so the data pillar might be the most important pillar. One challenge with data is that applications supply data access. Many applications are not written to support modern authentication mechanisms and don’t handle the protocols needed to integrate with contemporary data environments so the applications might not support a Zero Trust data model. Hopefully, you’re already experimenting with current mechanisms for your microservice environment. But, if not, as with any elephant, you eat it one bite at a time.
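The "automated rules engine" idea can be sketched as a small policy lookup. This is a hedged toy, not a real Zero Trust product: the classification labels, trust scores, and minimum thresholds are all invented, and a production engine would evaluate far richer request context (device posture, location, time, data sensitivity tags from the catalogue).

```python
# Toy rules engine: a data request is allowed only if the caller's trust
# score meets the minimum required for the data's classification.
POLICY = {
    "public":       {"min_trust": 0},
    "internal":     {"min_trust": 1},
    "crown_jewels": {"min_trust": 3},
}

def authorize(request):
    """Deny by default; allow only when the policy threshold is met."""
    rule = POLICY.get(request["classification"])
    if rule is None:
        return False  # unclassified data is denied outright
    return request["trust_score"] >= rule["min_trust"]

print(authorize({"classification": "crown_jewels", "trust_score": 2}))  # denied
print(authorize({"classification": "internal", "trust_score": 2}))      # allowed
```

Note the deny-by-default branch for unclassified data: it is what makes the inventory and valuation work described above a prerequisite rather than a nice-to-have.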



Quote for the day:

"Your time is limited, so don't waste it living someone else's life." -- Steve Jobs

Daily Tech Digest - September 20, 2023

Innovation needs to be a culture, not just a practice

It is important to build open organisational structures that let teams avoid obstacles and hierarchies that frequently stifle creativity. An inventive culture places a strong emphasis on being flat and agile. Employees are more able to freely communicate their thoughts when they have direct access to decision-makers. The well-known sportswear company Nike is one example of this. All levels of staff members are welcome to work together on cutting-edge concepts and technologies at the company's "Innovation Kitchen." This open mindset has produced ground-breaking goods like the Nike Flyknit, which transformed the athletic footwear market. ... Most businesses have started encouraging the participation of employees across sectors in brainstorming sessions to think outside the box because they respect unusual thinking and believe there are no negative ideas. But in some circumstances, one should be ready to also support the genuinely absurd. Innovation requires a space where creativity can thrive.


Quantum Plus AI Widens Cyberattack Threat Concerns

The mind-boggling speed of quantum computing is a double-edged sword, however. On one hand, it helps solve difficult mathematical problems much faster. On the other, it would increase the cyberattack capabilities beyond comprehension. “When you marry quantum computing and AI together, you can have an exponential increase in the advantages that both can offer,” said Dana Neustadter, director of product management for security IP at Synopsys. “Quantum computing will be able to enhance AI accuracy, speed, and efficiency. Enhancing AI can be a game changer for the better for many reasons. Paired with quantum computing, AI will have greater ability to solve very complex problems. As well, it will analyze huge amounts of data needed to take decisions or make predictions more quickly and accurately than conventional AI.” Very efficient and resilient solutions for threat detection and secure management can be created with enhanced AI, transforming cybersecurity as we know it today. “However, if used for the wrong reasons, these powerful technologies also can threaten cybersecurity,” Neustadter said.


IoT startups fill security gaps

Insider risks have long been one of the most difficult cybersecurity threats to mitigate. Not only can power users, such as C-level executives, overrule IT policies, but partners and contractors often get streamlined access to corporate resources, and may unintentionally introduce risks in the name of expediency. As IoT continues to encompass such devices as life-saving medical equipment and self-driving vehicles, even small risks can metastasize into major security incidents. For San Francisco-based self-driving car startup Cruise, a way to mitigate the many risks associated with connected cars is to conduct thorough risk assessments of partners and suppliers. The trouble is that third-party assessments were such a time-consuming and cumbersome chore that the existing process was not able to scale as the company grew. “The rise in cloud puts a huge stress on understanding the risk posture of our partners. That is a complex and non-trivial thing. Partnerships are always under pressure,” said Alexander Hughes, Director of Information Security, Trust, and Assurance at Cruise.


Expert: Keep Calm, Avoid Overhyping China's AI Capabilities

"Some of China's bottlenecks relate to a reliance on Western companies to open up new paradigms, China's censorship regime, and computing power bottlenecks," Ding said. "I submitted three specific policy recommendations to the committee, but I want to emphasize one, which is, 'Keep calm and avoid overhyping China's AI capabilities.'" Policymakers also erroneously think anything that helps China around artificial intelligence is going to hurt the U.S. even though giants in China's AI industry like ByteDance, Alibaba and Baidu end up generating a lot of profits that come back into the U.S. economy and hopefully get reinvested into American productivity, according to Ding. "It's a more difficult question than just, 'Any investment in China's AI sector means it's harmful to U.S. national security,'" Ding said. "Continuing to maintain the openness of these global innovation networks is always going to favor the U.S. in the long-run in terms of our ability to run faster."


Beyond Spreadsheets: How Data-Driven Organizations Outperform the Rest

Creating a data-driven culture must start at the executive level to drive the understanding that data is central to the operations and success of your organization, as well as to decision-making at every level. It begins with communicating the importance of data, making it a corporate initiative. From there must follow implementing the data infrastructure and analytics tools that enable every role to get the data needed to drive evidenced-based decision-making. There is no right or wrong organizational structure to create a data-driven culture. Still, creating and assigning roles and responsibilities that will work for your organization, and then staffing and training accordingly, are essential. You may choose to train most of your staff to understand and support analytics, or you may rely on a few for performing analytics while conveying across your organization the overall importance and requirements of using data and analytics to drive desired results. 


Modeling Asset Protection for Zero Trust – Part 1

For operating your IT environment, the Security Information and Event Management (SIEM) system must be a good fit for the infrastructure. Once you have a complete inventory of your infrastructure, we recommend you complete an architectural-level evaluation of your SIEM to ensure good alignment. ... The evaluation should include the cost of setup and three years of operations, evaluation of organizational competence and available training for each, and the features of each against your IT landscape. As you evaluate your SIEM environment, consider evaluating your Extended Detection and Response (XDR) capability and performing a similar architectural evaluation. You might consider this part of your SIEM solution or treat it separately, and it might be operated by a separate group. XDR also might not fit well into any pillar evaluation, so it could be overlooked if not captured here. Zero Trust requires identification and valuation of all information technology (IT) assets, automated enforcement of governance, and automated detection, response, and remediation to threats and attacks.


Data Engineer vs. Data Analyst

Data engineers play a pivotal role in establishing and maintaining robust Data Governance practices. They are responsible for designing and implementing data pipelines, ensuring that data is collected, stored, and processed accurately. By implementing rigorous quality checks during the extract, transform, load (ETL) process, they guarantee that the data is clean and reliable for analysis. Data analysts, on the other hand, rely on high-quality, trustworthy data to derive meaningful insights. They work closely with data engineers to define standards for data collection, storage, and usage. ... So a crucial similarity between data engineers and data analysts is their shared emphasis on teamwork and collaboration. Both roles recognize that combining their expertise can lead to more accurate insights and better decision-making. Moreover, teamwork enables knowledge sharing between data engineers and analysts: they can exchange ideas, techniques, and best practices, enhancing their individual skill sets while collectively driving innovation in Data Management and analysis.
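The quality checks described above might look like the following minimal sketch of a transform-step validation, where bad records are quarantined rather than loaded. The field names and rules here are illustrative assumptions, not any specific team's standard.

```python
# Hypothetical ETL quality check: validate records in the transform step,
# quarantining rejects so only clean rows reach analysts. Schema is illustrative.

def validate_record(record: dict) -> list:
    """Return a list of quality problems found in one record (empty = clean)."""
    problems = []
    if not record.get("customer_id"):
        problems.append("missing customer_id")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        problems.append("invalid amount")
    return problems

def transform(records: list) -> tuple:
    """Split extracted records into clean rows to load and rejects to quarantine."""
    clean, rejected = [], []
    for r in records:
        (rejected if validate_record(r) else clean).append(r)
    return clean, rejected

clean, rejected = transform([
    {"customer_id": "c1", "amount": 19.99},
    {"customer_id": "", "amount": -5},
])
print(len(clean), len(rejected))  # prints: 1 1
```

Keeping rejects in a quarantine set, instead of silently dropping them, gives engineers and analysts a shared artifact for agreeing on data-quality standards.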


What AppSec and developers working in cloud-native environments need to know

With the emergence of IaaS, PaaS, and SaaS models, the definition of an application has expanded to include the associated runtime environment and the underlying infrastructure. Applications are no longer just bundles of code, but holistic systems that include the virtualized hardware resources, operating systems, databases, and network configurations they rely on. The advent of microservices and containerization, where an application can consist of many independently deployable components, each running in its own environment, further complicates this definition. In a cloud-native application, each microservice, with its code, dependencies, and environment, could be considered an “application” in its own right. The introduction of Infrastructure as Code (IaC) has complicated the definition further still. IaC is the practice of managing and provisioning infrastructure through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.
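The core IaC idea of a machine-readable definition can be sketched as a desired-state declaration plus a plan step that computes what must change, which is broadly how tools such as Terraform operate. The resource names and schema below are assumptions for illustration, not any real tool's format.

```python
# Minimal IaC-style sketch: infrastructure declared as data, reconciled by
# a planner rather than configured by hand. Resource schema is illustrative.

desired = {
    "web-vm": {"type": "vm", "cpus": 2, "memory_gb": 4},
    "app-db": {"type": "database", "engine": "postgres"},
}

current = {
    "web-vm": {"type": "vm", "cpus": 1, "memory_gb": 4},  # drifted: 1 CPU
}

def plan(desired: dict, current: dict) -> list:
    """Compute the actions needed to move current state to desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(f"create {name}")
        elif current[name] != spec:
            actions.append(f"update {name}")
    for name in current:
        if name not in desired:
            actions.append(f"delete {name}")
    return actions

print(plan(desired, current))  # → ['update web-vm', 'create app-db']
```

Because the definition file is the source of truth, the same plan-then-apply loop also catches configuration drift, as with the under-provisioned VM above.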


Could the DOJ’s Antitrust Trial vs Google Drive More Innovation?

The thought process among regulators, he says, might be that the antitrust case against Microsoft brought about change and created opportunities for more competition -- so a similar attempt with Google may be worth the effort. “This particular antitrust case really focuses narrowly on the company’s popular search engine, and it alleges that Google uses their 90% market share to illegally throttle competition in search and search advertising,” Kemp says. CTO Jimmie Lee of XFactor.io, a developer of a business decision platform, says he can understand some of big tech’s perspective, having come from Meta, Facebook’s parent, and Microsoft. “When you’re in the company, it feels very different from being on the outside,” he says. “From the inside, you see the strength of the technology and how you can better add security and privacy and features and functionalities throughout the entire stack and workflow.”


4 steps for purple team success

Purple teaming is a function of collaborative security. Historically, it has brought together offensive security engineers or pen testers from the red side of the team and investigators, detection engineers, and CTI analysts from the blue side. More recently, however, purple teams have looked very different, drawing in a wider variety of members: developers, architects, information system security officers, software engineers, DFIR teams, and BCP personnel, as well as other departments. To view the purple team simply as a tactical unit would be an oversimplification. Beyond the immediate operational benefits, the true value of a purple team lies in fostering cyber resilience: building an organizational capability that can not only withstand cyber threats but also adapt to and recover swiftly from them. By collaboratively assessing, learning, and adapting, the purple team approach instills a resilience mindset, ensuring that the organization is prepared for evolving cyber threats and capable of bouncing back even when breaches occur.



Quote for the day:

"If you don’t build your dream, someone else will hire you to help them build theirs." -- Dhirubhai Ambani