Daily Tech Digest - November 16, 2023

The Digital Revolution and the Demand for Cyber Insurance

The problem with cyberattacks is this – it’s extremely unlikely that any individual or organization can bring the potential of an attack down to zero. At the same time, as technology advances, attacks and risks are only going to get more complex and sophisticated. Hence, even the best firewalls and cybersecurity practices might prove toothless in the face of a highly coordinated attack. The prognosis gets worse when one looks at the current state of preparedness. ... Cyber insurance policies have proven effective in helping businesses pay off liabilities arising from stolen customer data, compromised passwords, breached bank accounts, frozen databases and a lot more. Comprehensive and dynamic cyber insurance protection becomes necessary for almost every business when one considers the severe reputational damage, regulatory fines, legal charges and customer obligations that arise from a cyber attack. The best part about cyber insurance is that many insurers also assist businesses in managing the attack – including response, ransom negotiation, legal proceedings and further protection to prevent a repeat of such attacks in the future.


From PKI to PQC: Devising a strategy for the transition

“Independently of PQC as a topic, one of the challenges often voiced by our customers is that public key infrastructure (PKI) can exist in a company in a broad range of departments, making it difficult to centralize the responsibility for and ownership of it,” Jason Sabin, Chief Technology Officer at digital security company DigiCert, told Help Net Security. Companies are solving that problem in different ways. In some cases, they centralize their cryptographic activity under one department and one head. In other cases, they create an acting committee, with stakeholders across the company who influence the direction of their programs. “The companies that have already started to centralize management have an organizational method to request that budget and schedule the activity. But for the organizations that have not, there’s a little bit of an organizational design challenge present. And that’s where, I think, the technology leaders need to partner with the business leaders to come up with the best organizational path forward,” he remarked.


Cisco: Generative AI expectations outstrip enterprise readiness

At the heart of most AI networks will be Ethernet, since high-bandwidth Ethernet infrastructure is essential to facilitate quick data transfer between AI workloads, Cisco stated. “Implementing software controls like Priority Flow Control (PFC) and Explicit Congestion Notification (ECN) in the Ethernet network guarantees uninterrupted data delivery, especially for latency-sensitive AI workloads.” For AI readiness, the Cisco research recommends that enterprises build in automation tools for network configuration in order to optimize data transfer between AI workloads. “Automation reduces manual intervention, improves efficiency, and allows the infrastructure to dynamically adapt to the demands of AI workloads,” the researchers said. ... A core component of the data center AI blueprint is Cisco’s Nexus 9000 data center switches, which support up to 25.6Tbps of bandwidth per ASIC and “have the hardware and software capabilities available today to provide the right latency, congestion management mechanisms, and telemetry to meet the requirements of AI/ML applications,” Cisco stated.


Security Is a Process, Not a Tool

A new approach is required that can be more easily scaled to record and map myriad interactions and processes continuously and at enterprise scale. Enter process mining for cybersecurity. Process mining has existed in numerous industries for over a decade. From enterprise resource management (ERP) systems to robotic process automation (RPA), where mapping a process is the first stage of deployment, capturing human interactions with technology as they run through their jobs is a familiar strategy. However, this approach has not been applied to cybersecurity for a handful of reasons. First, analyzing and cataloging processes is tedious work that many cybersecurity and IT teams prefer to leave to auditors. Asking the cybersecurity, IT, or networking teams to add this to their already heavy workloads of monitoring and securing infrastructure and software is unsustainable. Second, while cybersecurity and audit teams have long relied on data collected by agents, that data is largely tied to events and changes in security tools, not to processes.
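
Process mining's core move can be sketched in a few lines: given an event log of (case, activity) records, count which activity directly follows which. The log below is hypothetical and the model is deliberately minimal; real tools add timestamps, resources, and far richer process models on top of this directly-follows counting.

```python
from collections import Counter, defaultdict

# Hypothetical event log: (case_id, activity) pairs, already ordered in time.
event_log = [
    ("t1", "login"), ("t1", "open_ticket"), ("t1", "escalate"),
    ("t2", "login"), ("t2", "open_ticket"), ("t2", "close_ticket"),
    ("t3", "login"), ("t3", "open_ticket"), ("t3", "escalate"),
]

def directly_follows(log):
    """Count how often activity A is directly followed by activity B per case."""
    traces = defaultdict(list)
    for case_id, activity in log:
        traces[case_id].append(activity)
    edges = Counter()
    for trace in traces.values():
        for a, b in zip(trace, trace[1:]):
            edges[(a, b)] += 1
    return edges

edges = directly_follows(event_log)
print(edges[("login", "open_ticket")])  # → 3: every case starts the same way
```

Applied to security telemetry instead of ERP logs, the same counting surfaces which operational paths are common and which are anomalous outliers.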


Info Stealers Thrive in Hot Market for Stolen Data

Despite fierce competition and the ever-present threat of takedowns, info-stealer innovation continues as newcomers debut constantly and existing players refresh their offerings regularly. ... Researchers said the info stealer is being spread through a variety of common distribution tactics, including malicious websites - often disguised as a legitimate installer - as well as drive-by downloads and phishing campaigns. First discovered in 2020, the malware has previously been spread using search engine optimization poisoning techniques. At the beginning of 2022, researchers at BlackBerry warned that the malware was often "being bundled with legitimate, signed software," a ploy that "makes it difficult to detect the threat before it has been deployed onto a victim system." Crypto wallets remain one of Jupyter's top targets. The malware searches outright for data files tied to 17 different types of wallets - including Atomic, Guarda, SimplEOS and NEON - as well as for wild-card filenames based on the word "wallet," plus OpenVPN and remote desktop protocol credentials, BlackBerry reported.
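
From the defender's side, the reported targeting can be turned into a simple hunt: flag filenames matching the same wildcard patterns the stealer reportedly uses. The file names and pattern list below are illustrative, loosely based on the targets BlackBerry describes, and not an exhaustive indicator set.

```python
import fnmatch

# Hypothetical file inventory; a real hunt would walk the filesystem (os.walk).
candidate_files = [
    "exodus-wallet.json", "notes.txt", "Wallet.dat",
    "client.ovpn", "Default.rdp", "report.pdf",
]

# Patterns mirroring the reported targeting: wildcard matches on "wallet",
# plus OpenVPN profiles and RDP connection files.
patterns = ["*wallet*", "*.ovpn", "*.rdp"]

def flag_targets(files, patterns):
    """Return files a Jupyter-style stealer would likely collect (case-insensitive)."""
    flagged = []
    for name in files:
        if any(fnmatch.fnmatch(name.lower(), p) for p in patterns):
            flagged.append(name)
    return flagged

print(flag_targets(candidate_files, patterns))
# → ['exodus-wallet.json', 'Wallet.dat', 'client.ovpn', 'Default.rdp']
```

Knowing which of your endpoints hold files matching these patterns tells you where a successful infection would hurt most.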


Microsoft launches Fabric, adds Copilot for the new platform

Microsoft Fabric is a new environment for data integration, data management and analytics, bringing together a set of capabilities that enable customers to model and analyze data in myriad ways. The suite includes Power BI, Microsoft's longstanding traditional BI platform on which users can develop and consume data products such as reports, dashboards and AI and machine learning models. In addition, Fabric includes Azure Synapse Analytics, a cloud-based service for data integration, data warehousing and big data analytics. Finally, Fabric includes Azure Data Factory, an extract, transform and load service that enables customers to integrate and transform data at scale. In addition to the three formerly separate platforms, Fabric includes a multi-cloud data lake called OneLake that automatically connects to every data workload within Fabric. OneLake comes with shortcuts to data sources such as Azure Data Lake Storage Gen2 and Amazon S3. The combination of the previously disparate platforms in a single environment is designed to simplify data management and analysis, according to Microsoft.


The Importance Of Continuous Testing In Agile And DevOps Environments

By embracing continuous testing, businesses can reduce test cycle times. This accelerated testing process integrates testing seamlessly throughout the development life cycle. As a result, organizations can deliver high-quality software faster, ensuring a quicker time to market. This agility allows them to respond promptly to customer demands, stay ahead of the competition and seize valuable market opportunities. ... Implementing continuous testing can increase collaboration among teams. By fostering effective collaboration and communication between development, testing, and operations teams, organizations can create a seamless integration of testing within their agile and DevOps processes. This alignment enables them to streamline workflows, enhance knowledge sharing, and drive efficient cross-functional teamwork, ultimately resulting in better software quality and faster time to market. ... By prioritizing continuous testing, organizations deliver bug-free and feature-rich products consistently, meeting or exceeding customer expectations. 


Five Ways for Digital Trust Professionals to Improve Soft Skills

“Emotional intelligence is the ability to recognize, understand, manage, and effectively respond to one’s own and others’ emotions. The reality is that soft skills and emotional intelligence are even more important than technical skills as predictors of success.” “Soft skills are important for auditors and cybersecurity professionals because they facilitate effective communication, build client confidence, foster trust, enhance teamwork and enable a nuanced understanding of organizational dynamics and human behavior, all of which are essential for identifying risks, conveying complex findings and ensuring the successful implementation of recommended security measures. ... Problem-solving: Problem-solving remains a constant need, particularly as AI and LLMs introduce new challenges. The stable importance of this skill indicates that while AI can provide significant data analysis, humans are still needed for ultimate decision-making, especially when ethical or complex considerations are involved."


How US SEC legal actions put CISOs at risk and what to do about it

If CISOs find themselves in this awkward position, step one is to meet with the general counsel and listen to the attorney's reasoning for the changes, Rasch said. If the CISO is still not satisfied, the next step is to meet with the CEO and listen to the chief executive's rationale. If the CISO is still not satisfied, Rasch suggests hiring outside counsel to offer an ostensibly objective assessment of whether the filing constitutes legal fraud. ... After retaining counsel, all subsequent moves are fraught with danger. "If the CISO believes that there has been a fraud to the SEC, the CISO has an obligation to report it to the board. That may itself be corporate suicide," Rasch said, adding that the next move, going to the feds, is even more problematic. "Going to the SEC is crossing the Rubicon." "The CISO is not an expert on SEC disclosures, but you have an officer who now knows that the company made materially false disclosures," Rasch said. "There is a legal obligation for the CISO to do so if the CISO is right. And only if the CISO is right."


Agile Coaching as a Path toward a Deeper Meaning of Work and Life

Agile coaching can only be learned in actual interactions with people, not by learning methodologies or concepts in classrooms. Of course, classroom training, books, and individual coaching can be of great help. However, the main purpose of all this is to build a social setting, where people can develop new ways of participating in conversations about work. In practice, the best way to learn is to participate in various coaching situations with a more experienced coach. The key is to reflect on what kind of cooperative patterns emerged in those situations, and how they were handled. It is also important to learn where the focus of agile coaching is. When an expert, such as a software designer, focuses on developing a product, an agile coach focuses on the social patterns that are emerging around that product development. In practice, the coach learns to focus on the conversations and interactions between the software designers and helps to transform the cooperative patterns of product development.



Quote for the day:

"Don't wait for the perfect moment; take the moment and make it perfect." -- Aryn Kyle

Daily Tech Digest - November 15, 2023

The IT Jobs AI Could Replace and the Ones It Could Create

Knowledge base managers and data scientists will be essential roles for enterprises as more and more data is fed into large language models (LLMs). “It's still a garbage in, garbage out problem, and if AI will now do more of our work, what we feed them is more important than ever,” says Katz. De Ridder expects prompt engineering to emerge as an important skill in the IT field rather than a distinct job. He describes new jobs that could come of the AI boom: agent and multiagent engineers. Agent engineers would maintain and adjust the AI agent processes, while multi-agent system engineers would function as project managers overseeing the complex processes and outcomes supported by multiple AI agents. These jobs will have myriad specializations tied to different fields, according to De Ridder. As more and more AI use cases emerge, IT workers could increasingly be looked at as AI co-pilots. How will they work alongside this technology to improve productivity, and how will they oversee AI capabilities to ensure the desired outcomes?


Microsoft Zero-Days Allow Defender Bypass, Privilege Escalation

But as with every Microsoft monthly update, there are several bugs in the latest batch that security experts agreed merit greater attention than others. The three actively exploited zero-day bugs fit that category. One of them is CVE-2023-36036, a privilege escalation vulnerability in Microsoft's Windows Cloud Files Mini Filter Driver that gives attackers a way to acquire system-level privileges. Microsoft has assessed the vulnerability as being a moderate — or important — severity threat but has provided relatively few other details about the issue. Satnam Narang, senior staff research engineer at Tenable, identified the bug as something that is likely going to be of interest to threat actors from a post-compromise activity standpoint. An attacker requires local access to an affected system to exploit the bug. Exploitation involves little in the way of complexity, user interaction, or special privileges. Windows Cloud Files Mini Filter Driver is a component that is essential to the functioning of cloud-stored files on Windows systems, says Saeed Abbasi, manager of vulnerability and threat research at Qualys.


How to infuse strategy into everything your company does

The strategic goal-setting landscape is evolving, moving beyond global companies like Patagonia. It’s shifting from top-down mandates to a dynamic, bidirectional model that fosters ambition and collaboration at all levels. In highly successful organizations like LeanIX, an enterprise architecture management firm, we have watched how OKRs have been both a philosophy and a recipe for success and growth. LeanIX’s use of OKRs is not just a way to break down the company’s strategy and to agree on a common focus for the quarter; it’s an integral part of adopting a growth mindset. This ensures that the entire organization is continuously thinking big, aiming high, and trying out new approaches to achieve the next significant leap. ... Contemporary boardrooms have to echo the aspirations and values of Gen Z, emphasising both diversity and innovation. Merely having organizational strategies and cultural values framed and displayed on walls won’t suffice. They must be actively lived and practiced. Over a third of Gen Z expect leaders to not just lead but inspire. They demand a transparency that goes beyond open communication. 


The Art of Digital Continuity: Ensuring Data Availability in Disasters

During disasters, managers and IT employees bear the emotional burden of maintaining a calm and efficient work environment. This emotional labor can lead to stress and burnout, so managing it is key to maintaining productivity and data security during disasters. Here are some ways these professionals can cope with the emotional toll:

Communication - Open and honest communication about the disaster’s impact is key for managing emotions. Keeping employees informed can help them feel more in control of the situation.

Support - Providing psychological support, such as counseling or mental health resources, can help employees cope with stress and anxiety during a disaster.

Training - Prioritizing training on disaster response and emotional management can better prepare IT professionals for high-stress situations.

... Remember that disaster preparedness is not a one-time effort — it requires continuous monitoring, testing, and adaptation to protect valuable data. When disaster strikes and data is lost, the first step is to create a new and improved information security plan.


Four Levels of Agile Requirements

Visioning: This is the initial step of gathering requirements. The goal is to help identify all the Themes and some features desired. This exercise begins to define the scope of what is expected.

Brainstorming: The goal of this step is to identify all the features and stories desired. The key here is Breadth First, Depth Later. So instead of discussing the details of each feature and story, our main goal is to FIND all the features and stories.

Breakdown: The goal of this step is to break down and slice the stories that are still too large (EPICs) into smaller chunks. You probably have already done a lot of slicing during brainstorming, but as you comb your backlog, the team will realize that some stories are still too large to be completed within an iteration. Slicing stories is an art and I will dedicate an entire blog to it!

Deep Dive: This is the step everyone wants to jump into right away! Yes, finally, let’s talk about the details. What will be on the screen, what are the exact business rules and how will we test them, what will the detailed process look like, and what are the tasks we need to get done to complete this story?


Dynamic Availability: Protocol-Based Assurances

The distinctive feature of proof-based consensus protocols is the fact that the protocol continues to function even when there is only one miner. Therefore, miner nodes are free to leave and re-enter the competition at any time. Thus, the protocol maintains availability even under undesirable network conditions. To deal with cases where there are multiple leaders (concurrent solvers of the puzzle), honest nodes follow a simple rule: select the ledger with the highest number of blocks (i.e., the longest chain). In cases where chains have equal lengths, pick the one you witnessed earliest. Note that, in the given scenario, there is no way to determine whether there is a set of adversaries that are processing a parallel ledger without informing the rest of the network until their ledger becomes longer than the chain of the benevolent node. When they have a longer chain, they reveal their chain, waiting for the rest of the network to adapt to it, thus effectively ignoring all transactions that were in the neglected blocks. Due to this, one can never be sure whether a transaction is irreversible.
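
The chain-selection rule described here is easy to express in code. This is a minimal sketch, assuming each candidate chain carries the time at which the node first witnessed it:

```python
# Honest-node chain selection: prefer the longest chain; on a tie in length,
# prefer the chain witnessed earliest (smallest first_seen timestamp).
def select_chain(chains):
    """chains: list of (block_list, first_seen) tuples. Returns the winner."""
    return max(chains, key=lambda c: (len(c[0]), -c[1]))

chain_a = (["genesis", "a1", "a2"], 10)  # 3 blocks, first seen at t=10
chain_b = (["genesis", "b1", "b2"], 7)   # 3 blocks, first seen earlier at t=7
chain_c = (["genesis", "c1"], 1)         # shorter chain, seen first overall

best = select_chain([chain_a, chain_b, chain_c])
# chain_b wins: tied with chain_a on length, but witnessed earlier.
```

The same rule is what lets a hidden adversarial chain displace the honest one the moment it becomes longer, which is why finality here is only probabilistic.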


Are firms using mergers and acquisitions to inherit talent?

“I don’t think there’ll be an explosion in the number of acquisitions over the year ahead, but the people and team acquisition element will play a bigger role than in the past,” she says. “Technology is moving so fast that if you acquire a team already working well together on bleeding-edge technology, you can be up and running from day one.” But purchasing a business to get hold of talent is one thing. Holding onto that talent to deliver on the hoped-for value from the acquisition is quite another. The problem here is that if employees are unhappy with the move, feel uncertain about the future, or cannot see any post-deal career progression opportunities, they will simply vote with their feet. ... A key problem with the way many M&A transactions are conducted though, he believes, is that “people tend to come last on the priority list after financing and geography” - even though “you’re asking them to do the equivalent of move home, which because the decision isn’t theirs, can feel threatening”. But Robbins warns: “You fundamentally need to retain people, skills and capabilities if the deal is going to be a success. The business depends on two things - its customers and its staff, and if you’re not giving them what they want, it’s not going to go well.”


Why the Future for Enterprise Success Has to be Agile

Agile solutions enable enterprises to mitigate risks and reduce project failures, gain a competitive edge, and seize new opportunities in the digital age. Through iterative development and continuous feedback cycles, organizations can identify and address potential issues early on. This piece-by-piece approach minimizes the likelihood of costly mistakes and allows for corrections and updates in real-time, ensuring successful project delivery. Working in an Agile way also means that enterprises can be better prepared for the hype points in technology, such as the boom of generative AI this year. Agile enterprises are much better positioned to react and readjust their offerings in real time, addressing the interests of their market, than those with lengthy, drawn-out development timelines. This isn’t to say that Agile enterprises aren’t planning ahead, but instead that they follow a test-and-learn approach, with their plans being flexible and malleable to the ebbs and flows of the market.


Developer Empowerment Via Platform Engineering, Self-Service Tooling

“As a developer the way we build, test and deploy has gotten more complex,” Medina said, in her role play as a developer, lamenting her loss of autonomy in this time of public cloud, serverless workloads and Kubernetes. “Unfortunately that means that, as a developer, if I want to have access to the things that I need when I want them, I’m at the mercy of other teams to bring things up for me. I’m at the mercy of the platform engineering team and I hate waiting for people to do things for me,” she said. Indeed a platform engineering team never is short on backlog items. But often they are stuck performing the operations role so much that they aren’t able to build those golden paths and automation. “OK, as platform engineers, we have the keys to the so-called cloud kingdom, but, listen, it’s not all about you. It’s not all about DevEx. We also have to maintain reliable systems. And it’s too much work and we are super stressed. We are at the point where we are drowning in Jira tickets,” Villela replied, wearing the hat of a platform engineer.


Understanding OWASP’s Bill of Material Maturity Model: Not all SBOMs are created equal

Much as with other industry efforts such as zero trust, the journey towards establishing widespread mature BOMs with sufficient detail and depth will be just that — a journey. That said, resources such as OWASP's SBOM Guide and the BOM Maturity Model can serve as great tools that organizations, software suppliers and consumers can use to mature their implementation of SBOMs and ensure they are providing sufficient insight and details to be used in activities such as software asset inventory, vulnerability management and software supply chain security. ... While the journey may seem daunting, the alternative is continuing the historical status quo of blind software consumption with limited transparency and insight into the software we are consuming, its lineage, who's been involved in it and what has occurred to it along the way. We wouldn't settle for this level of opaque risky consumption in other industries such as food and pharmaceuticals and with software increasingly driving nearly every aspect of society, we shouldn't settle for a lack of transparency here either.
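
To make the transparency point concrete, here is a minimal, hypothetical CycloneDX-style SBOM fragment and a few lines that flatten it into an asset inventory. Real SBOMs carry far more depth (hashes, licenses, pedigree), which is exactly the detail the BOM Maturity Model grades; the component names and versions below are illustrative only.

```python
import json

# Hypothetical, minimal CycloneDX-style SBOM fragment.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "openssl", "version": "3.0.8", "type": "library"},
    {"name": "log4j-core", "version": "2.14.1", "type": "library"}
  ]
}
"""

def inventory(sbom_text):
    """Flatten an SBOM into (name, version) pairs for asset-inventory checks."""
    bom = json.loads(sbom_text)
    return [(c["name"], c["version"]) for c in bom.get("components", [])]

packages = inventory(sbom_json)
# These pairs can then be cross-referenced against a vulnerability feed.
```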



Quote for the day:

"Difficulties strengthen the mind, as labor does the body." -- Seneca

Daily Tech Digest - November 14, 2023

Balancing act: CISOs knife-edge role in modern cybersecurity

Enhanced personal liability and duty of care are becoming increasingly unavoidable for many industries under the NIS2 (Network and Information Systems Directive) - a directive to set higher standards for cybersecurity across the European Union - and DORA (Digital Operational Resilience Act). This change is unnerving for CISOs as their role is officially recognized by regulators, shareholders, and customers. 62% cited concerns about personal liability in a recent global survey by Proofpoint, demonstrating the increased pressures of the role. ... Cybercriminals are already experienced users of AI, with ransomware producers incorporating AI and machine learning techniques into their malware while using it to target specific victims and evade antivirus software detection. Such use of advanced technology is expected to continue as ransomware developers become more proficient in their tactics and multiply the challenges CISOs will face. While AI can automate threat detection and response, it requires an understanding of past threat activity. 


Exploring the Role of Consensus Algorithms in Distributed System Design

Consensus, in the context of distributed systems, is the act of getting a group of nodes to agree on a single value or outcome, even if failures and network delays occur. This agreement is vital for the proper functioning of distributed systems, for it ensures that all nodes operate cohesively and consistently, even when they are geographically dispersed. ... At the heart of many consensus algorithms is the concept of leader election, as it establishes a single node responsible for coordinating and making decisions on behalf of the group. In other words, this leader ensures that all nodes in the system agree on a common value or decision, promoting order and preventing conflicts in distributed environments. Fault tolerance is a critical aspect of consensus algorithms as well, as it allows systems to continue functioning even in the presence of node failures, network partitions, or other unforeseen issues. Consistency, reliability, and fault tolerance are among the primary guarantees offered.
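
As a toy illustration of leader election, a bully-style rule picks the highest-ID node that is still reachable. Real protocols such as Raft or Paxos layer terms, heartbeats, and quorum checks on top of this idea; the sketch below assumes a perfectly known membership list, which real systems cannot.

```python
# Bully-style election sketch: among live nodes, the highest ID becomes leader.
def elect_leader(node_ids, alive):
    """Return the highest-ID node reported alive, or None if none are."""
    live = [n for n in node_ids if alive.get(n, False)]
    return max(live) if live else None

nodes = [1, 2, 3, 4, 5]
alive = {1: True, 2: True, 3: True, 4: True, 5: False}  # node 5 has crashed

leader = elect_leader(nodes, alive)
print(leader)  # → 4: node 4 takes over because node 5 is down
```

This also shows why fault tolerance matters: the election result must change deterministically when the current leader fails, or the cluster stalls.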


Rogue state-aligned actors are most critical cyber threat to UK

These groups have become emboldened to act with impunity regardless of whether or not they have Russia’s official backing, and the NCSC said it had “concerns” that these groups have a higher risk appetite than those advanced persistent threat (APT) actors – such as Sandworm – that operate as units of the Russian intelligence and military services. This makes them a far more dangerous threat because they may seek to attack CNI operators without constraint and without being able to fully understand, or control, the impact of their actions. The consequences of this could be exceptionally severe. At the same time, Russian APTs continue to advance their goal of weakening and dividing Moscow’s adversaries by interfering in the democratic process using mis- and disinformation and cyber attacks. ... Of particular concern in the next go-round will be large language models (LLMs), which will almost certainly be used to generate fabricated content and deepfakes before the election, and a developing trend of targeting the email accounts of prominent individuals, as previously reported.


Fostering an automation-driven operations mindset in enterprises

By embracing automation, companies are changing the way they operate. This can mean rethinking their entire business model to become more profitable and competitive. However, this change is not always easy. Businesses face various challenges, such as dealing with disruptions in the market, figuring out the right number of employees needed for their operations, and keeping up with the ever-changing market conditions. Businesses are recognising that in order to stay relevant and successful, they need to undergo a digital transformation. This means adopting new technologies and ways of doing things to achieve significant positive changes in their operations. Automation has the power to create these changes across all types of industries, including retail, logistics, manufacturing, and the BFSI sector. ... This shift is so significant that the market for industrial automation in India is expected to double from USD 13.23 billion in 2023 to USD 25.76 billion by 2028. This is a clear indication that companies are investing heavily in automation to ensure they remain competitive and up to date with the latest advancements.


MongoDB vs. ScyllaDB: A Comparison of Database Architectures

The MongoDB architecture enables high availability through the concept of replica sets. MongoDB replica sets follow the concept of primary-secondary nodes, where only the primary handles the write operations. The secondaries hold a copy of the data and can be enabled to handle read operations only. A common replica set deployment consists of two secondaries, but additional secondaries can be added to increase availability or to scale read-heavy workloads. MongoDB supports up to 50 secondaries within one replica set. Secondaries will be elected as primary in case of a failure at the former primary. ... Unlike MongoDB, ScyllaDB does not follow the classical relational database management system (RDBMS) architectures with one primary node and multiple secondary nodes, but uses a decentralized structure, where all data is systematically distributed and replicated across multiple nodes forming a cluster. This architecture is commonly referred to as multiprimary architecture. A cluster is a collection of interconnected nodes organized into a virtual ring architecture, across which data is distributed. 
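
The ring-based placement that underpins ScyllaDB's multiprimary design can be sketched with a simple hash ring: hash every node and every key onto the same space, and assign each key to the first node at or after its position. The token function and node names here are illustrative, not Scylla's actual partitioner.

```python
import hashlib
from bisect import bisect_right

def token(value: str) -> int:
    """Map a node name or key to a position on the ring (illustrative hash)."""
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes):
        # Sort nodes by their ring position to form the virtual ring.
        self.ring = sorted((token(n), n) for n in nodes)

    def owner(self, key: str) -> str:
        """The key's owner is the first node clockwise from the key's token."""
        tokens = [t for t, _ in self.ring]
        idx = bisect_right(tokens, token(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])
owner = ring.owner("user:42")  # deterministic: same key, same owner, no primary
```

Because every node can own and serve writes for its token ranges, there is no single primary to elect or fail over, in contrast to the replica-set model above.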


Relationship management: The unsung art of optimizing IT teams

Getting the most out of IT staff and unleashing synergies among IT teams is among the more underappreciated skills an IT leader must have to optimize their organization’s efforts. And for that you must develop an uncanny knack for relationship management and an understanding of how differing personalities can enforce and work with one another to great effect. After all, IT brings together a diverse range of personalities, from statisticians, mathematicians, and developers who are rooted in the rigors of computer science, to liberal arts majors who might just as soon be writing a novel if it could pay the bills. So, how do you as an IT leader unify these wide-ranging personalities into a cohesive project team? The short answer is that you don’t try to change anyone. Instead, you seize on common goals most team members have: To see success, feel good about the work they do, and contribute in ways that play to their strengths — while avoiding what they find off-putting or unproductive.


As perimeter defenses fall, the identity-first approach steps into the breach

An identity-first strategy is all about knowing the identity of all humans and non-humans accessing points within the enterprise. In other words, the strategy calls for the organization to know each employee, contractor, and business partner as well as endpoint, server, or application that seeks to connect. It is often also called identity-centric or identity-first security. It's foundational to implementing zero trust because zero trust says trust no entity until that entity — whether human or machine — can authenticate that it is who it says it is and can verify it has been authorized to access the network, application, API, server, etc. that it's seeking to access. ... As Avijit explains, no single solution delivers an identity-first strategy. Rather, it requires a synthesis of policies, practices and technology — like nearly everything else in cybersecurity. Those elements must come together to achieve three key objectives, says Henrique Teixeira, senior director analyst at Gartner, a research and advisory firm.


Collaborative strategies are key to enhanced ICS security

Cooperation between IT (information technology) and OT (operational technology) departments is extremely important to address unique security challenges in industrial sectors. The IT department is usually responsible for managing computer systems, networks, and data, while the OT department manages operating systems, industrial control systems, and sensors. Synergy between these departments allows for a better understanding and confrontation of threats involving industrial control systems. IT teams have expertise in information security, and OT teams have years of experience working with industrial systems. By combining the knowledge of both departments, one can proactively identify and address security vulnerabilities and threats. The advantages of training these departments with each other are many. First, understanding both aspects – information and industrial technology – allows for more effective identification and analysis of security challenges that are specific to the industrial sectors.


3 cybersecurity compliance challenges and how to address them

Changes in regulations can be as rapid as the introduction of new products or the emergence of new threats and attacks. Thus, organisations need to be agile enough to keep up with regulatory changes. Unfortunately, not many of us have the ability to do this on our own. The cybersecurity skills shortage continues to be a problem when it comes to compliance: many organisations lack the right people to properly address cyber threats, let alone continuously monitor regulatory changes. The challenge of keeping up with changing regulations can be addressed with the help of resources that track updates for you. Often, these are tailored to specific business niches. For companies involved in credit and financial service operations, for example, the cybersecurity alerts of the National Association of State Credit Union Supervisors (NASCUS) provide up-to-date information on the latest regulations that affect those in the business of extending credit and other financial services. There are also regulation-monitoring subscription services that provide updates on regulations in general.


Ethical Considerations in AI and Cloud Computing: Ensuring Responsible Development and Use

Transparency and ethics go hand in hand. With AI, transparency is an essential ethical practice that plays a role in meaningful consent, accountability, and algorithmic auditing. Transparency is essential for driving public acceptance and trust in AI. AI has been accused of having a “black box” problem, referring to the lack of transparency in how it operates and the logic behind its decisions. The use of complex algorithms and proprietary systems contributes to the problem. Ethical practices must address the black box issue by ensuring a high level of transparency in AI development and deployment. ... Assigning responsibility for the outcomes of AI-driven systems is perhaps the most important ethical consideration. If an AI-powered system guiding medical diagnosis makes a decision that leads to failed medical treatment, who should take responsibility? Is the AI developer, the technology firm that deployed the AI, or the doctor ultimately accountable for the bad information?



Quote for the day:

"A leader is one who sees more than others see, who sees farther than others see and who sees before others see.” -- Leroy Eimes

Daily Tech Digest - November 13, 2023

Navigating the Crossroads of Data Confidentiality and AI

Striking a balance between ensuring data privacy and maximizing the effectiveness of AI models can be quite complex. The more data we utilize for training AI systems, the more accurate and powerful they become. However, this practice often clashes with the need to safeguard privacy rights. Techniques like federated learning offer a solution by allowing AI models to be trained on data sources without sharing raw information. For the uninitiated, Federated Learning leverages the power of edge computing to train local models. These models use data that never leaves the private environment (like your phone, IoT devices, corporate terminals, etc.). Once the local models are trained, they are then leveraged to build a centralized model that can be used for related use cases. ... Due to the recent acceleration in the adoption of AI, government regulations play a pivotal role in shaping the future of AI and data confidentiality. Legislators are increasingly recognizing the significance of data privacy and are implementing laws such as the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA).
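The federated learning flow described above can be sketched in a few lines. This is a minimal illustration of federated averaging (FedAvg) with a toy logistic-regression model; the client datasets, learning rate, and number of rounds are all invented for the example — real frameworks add secure aggregation, sampling, and far more:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a logistic-regression model locally; raw data never leaves this function."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))           # sigmoid predictions
        w -= lr * X.T @ (preds - y) / len(y)        # gradient step on local data only
    return w

def federated_average(global_w, clients):
    """Aggregate client models weighted by their local sample counts (FedAvg)."""
    updates = [(local_update(global_w, X, y), len(y)) for X, y in clients]
    total = sum(n for _, n in updates)
    return sum(w * (n / total) for w, n in updates)

# Hypothetical private datasets held on three separate devices
rng = np.random.default_rng(0)
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 4))
    y = (X @ np.array([1.0, -2.0, 0.5, 0.0]) > 0).astype(float)
    clients.append((X, y))

w = np.zeros(4)
for _ in range(10):                                 # communication rounds
    w = federated_average(w, clients)               # only model weights are shared
```

Only the model parameters cross the network; each client's raw data stays in its own environment, which is the privacy property the article highlights.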


CISOs vs. developers: A battle over security priorities

“Developers and CISOs juggle numerous security priorities, often conflicting across organizations,” noted Luke Shoberg, Global CISO at Sequoia Capital. “The report emphasizes the need for internal assessments, fostering deeper collaboration, and building trust among teams managing this critical domain. Recognizing technical and cultural obstacles, organizations have made significant strides in understanding the importance of securing the software supply chain for sustained business success.” “The world of software consumption and security has radically changed. From containers to the explosion of open source components, every motion has been toward empowering developers to build faster and better,” said Avon Puri, Global Chief Digital Officer at Sequoia Capital. “But with that progress, the security paradigm has been challenged to refocus on better controls and guarantees for the provenance of where software artifacts come from and that their integrity is being maintained. The survey shows developers and security teams are wrestling with this new reality in the wake of major exploits like Log4j and SolarWinds.”


Deception technology use to grow in 2024 and proliferate in 2025

It's worth mentioning that all scanning, data collection, processing, and analysis will be continuous to keep up with changes to the hybrid IT environment, security defenses, and the threat landscape. When organizations implement a new SaaS service, deploy a production application, or make changes to their infrastructure, the deception engine notes these changes and adjusts its deception techniques accordingly. Unlike traditional honeypots, burgeoning deception technologies won't require cutting-edge knowledge or complex setup. While some advanced organizations may customize their deception networks, many firms will opt for default settings. In most cases, basic configurations will sufficiently confound adversaries. Remember, too, that deception elements like decoys and lures remain invisible to legitimate users. Therefore, when someone goes poking at a breadcrumb or canary token, you are guaranteed that they are up to no good. In this way, deception technology can also help organizations improve security operations around threat detection and response.
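The "guaranteed that they are up to no good" property of lures and canary tokens can be illustrated with a toy hook. Everything here is hypothetical — the secrets-store callback and the token names are invented — but it shows why a touch on a decoy needs none of the false-positive tuning that ordinary detections do:

```python
import logging

# Decoy credentials planted alongside real ones; legitimate code never reads them.
CANARY_TOKENS = {"svc-backup-admin", "aws-key-AKIAFAKE00000"}

def on_credential_access(principal: str, credential_id: str) -> bool:
    """Hook called by a (hypothetical) secrets store on every credential read."""
    if credential_id in CANARY_TOKENS:
        # Any access to a canary is high-fidelity: no legitimate path touches it,
        # so the alert requires no baselining or threshold tuning.
        logging.critical("canary tripped: %s read %s", principal, credential_id)
        return False  # deny the read and alert the SOC
    return True       # normal credentials are served as usual
```

A real deployment would wire this into the identity provider or secrets manager; the sketch only captures the detection logic.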


What Role Will Open-Source Hardware Play in Future Designs?

The extent of open-source hardware’s impact on electronics design is still uncertain. While it could likely lead to all these benefits, it also faces several challenges to mainstream adoption. The most significant of these is the volatility and high costs of the necessary raw materials. Roughly 70% of all silicon materials come from China. This centralization makes prices prone to fluctuations from local disruptions in China or throughout the supply chain. Similarly, long shipping distances raise related prices for U.S. developers. Even if integrated circuit design becomes more accessible, these costs keep production inaccessible, slowing open-source devices’ growth. Similarly, industry giants may be unwilling to accept the open-source movement. While open-source designs open new revenue streams, these market leaders profit greatly from their proprietary resources. The semiconductor fabs supporting these large companies are even more centralized. It may be difficult for open-source hardware to compete if these organizations don’t embrace the movement.


How Should Developers Respond to AI?

“Unionizing against AI” wasn’t a specific goal, Quick clarified in an email interview with The New Stack. He’d meant it as an example of just how much influence can come from a united community. “My main thought is around the power that comes with a group of people that are working together.” Quick noted what happened when the United Auto Workers went on strike. “We are seeing big changes happening because the people decided collectively they needed more money, benefits, etc. I can only begin to guess at what an AI-related scenario would be, but maybe in the future, it takes people coming together to push for change on regulation, laws, limitations, etc.” Even this remains a concept more than any tangible movement, Quick stressed in his email. “Honestly, I don’t have much more specific actions or goals right now. We’re just so early on that all we can do is guess.” But there is another scenario where Quick thinks community action would be necessary to push for change: the hot-button issue of “who owns the code.”


Security, privacy, and generative AI

For many of the proposed applications in which LLMs should excel, delivering false responses can have serious consequences. Luckily, many of the mainstream LLMs have been trained on numerous sources of data. This allows these models to speak on a diverse set of topics with some fidelity. However, there is typically insufficient knowledge around specialized domains where data is relatively sparse, such as deep technical topics in medicine, academia, or cybersecurity. As such, these large base models are typically further refined via a process called fine-tuning. Fine-tuning allows these models to achieve better alignment with the desired domain. Fine-tuning has become such a pivotal advantage that even OpenAI recently released support for this capability to compete with open-source models. With these considerations in mind, consumers of LLM products who want the best possible outputs, with minimal errors, must understand the data on which the LLM is trained (or fine-tuned) to ensure optimal usage and applicability.


How to keep remote workers connected to company culture

As important as workplace collaboration and communication tools are, technology alone can’t keep remote workers engaged with business objectives. Before the pandemic, auto finance firm Credit Acceptance centered its operations around in-person interactions in its offices, for which it got accolades; after COVID-19 arrived, the company’s 2,200 employees had to work remotely. “You didn't work from home at all – [only in] rare circumstances,” said Wendy Rummler, chief people officer at Credit Acceptance. “We considered our culture too important, [we believed that] we couldn't maintain it if we had a fully remote workforce, or even partially for that matter.” Fast forward a couple of years and the picture is markedly different now, with almost all staffers now fully remote. Internal pulse surveys have found that employee engagement has remained as high as before the pandemic, said Rummler. This is no accident, she said; Credit Acceptance deliberately set out to maintain its work culture without regular person-to-person interactions.


Should AI Require Societal Informed Consent?

The concept of societal informed consent has been discussed in engineering ethics literature for more than a decade, and yet the idea has not found its way into society, where the average person goes about their day assuming that technology is generally helpful and not too risky. In most cases, technology is generally helpful and not too risky, but not in all cases. As artificial intelligence grows more powerful and is applied to more new fields (many of which may be inappropriate), these cases will multiply. How will technology producers know when their technologies are not wanted if they never ask the public? ... One of the characteristics of a representative democracy is that -- at least in theory -- our elected officials are looking out for the well-being of the public. ... It is time for the government and the public to have a new conversation, one about technology -- specifically artificial intelligence. In the past we’ve always given technology the benefit of the doubt; tech was “innocent until proven guilty” and a long-time familiar phrase in and around Silicon Valley has been “it’s better to ask forgiveness, not permission.” We no longer live in that world.


Harnessing the potential of generative AI in marketing

Augmenting human creativity with the power of generative AI holds so much promise that the use cases we know now are only the tip of the proverbial iceberg. Companies that are looking to get a head start should, therefore, ensure that they have laid down the foundations for doing so. An important consideration in deploying generative AI is the availability of data. Contextualisation is a key benefit of generative AI and large language models (LLMs). But for enterprises with legacy, on-premise systems, their data is usually isolated within silos. Organisations looking to deploy generative AI solutions for their marketing efforts should leverage cloud data platforms to unify all their internal data. Aside from breaking down silos, businesses should also ensure seamless access to all their data. A lot of the data generated by marketing teams is either unstructured or semi-structured, such as social media posts, emails, and text documents, to name a few. Marketing teams should ensure that their cloud data platforms can load, integrate, and analyse all types of data.


Managing Missing Data in Analytics

Missing at Random (MAR) is a very common missing data situation encountered by data scientists and machine learning engineers, mainly because MCAR- and MNAR-related problems are handled by the IT department, while data issues are addressed by the data team. MAR data imputation is a method of substituting missing data with a suitable value. Some commonly used imputation methods for MAR are the following.

In hot-deck imputation, a missing value is imputed from a randomly selected record drawn from a pool of similar data records. Because a random function selects the donor record, the probabilities of selecting any candidate are assumed to be equal.

In cold-deck imputation, no random function is used to impute the value. Instead, deterministic functions, such as the arithmetic mean, median, or mode, supply it.

With regression imputation, for example multiple linear regression (MLR), the values of the independent variables are used to predict the missing values of the dependent variable via a regression model. First the regression model is derived, then the model is validated, and finally the missing values are predicted and imputed.
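As a rough illustration of the three MAR techniques above, here is a sketch using pandas. The column names and values are invented for the example, and cold-deck is shown in the article's sense of a deterministic statistic (the mean):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "age":    [25, 32, 47, 51, 38, 29, 44, 60],
    "income": [30.0, 42.0, np.nan, 75.0, np.nan, 35.0, 58.0, 80.0],
})

# Hot-deck: replace each missing value with a randomly chosen observed value.
hot = df["income"].copy()
donors = hot.dropna().to_numpy()
hot[hot.isna()] = rng.choice(donors, size=hot.isna().sum())

# Cold-deck (deterministic): impute with a fixed statistic such as the mean.
cold = df["income"].fillna(df["income"].mean())

# Regression imputation: predict income from age using the complete rows.
obs = df.dropna()
slope, intercept = np.polyfit(obs["age"], obs["income"], 1)  # simple linear fit
predicted = slope * df["age"] + intercept
reg = df["income"].fillna(predicted)
```

In practice the regression step would use all available independent variables and a validated model; a single-predictor fit is shown only to keep the sketch short.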



Quote for the day:

"Failure isn't fatal, but failure to change might be" -- John Wooden

Daily Tech Digest - November 12, 2023

The metaverse has virtually disappeared. Here's why it's generative AI's fault

"It's basically going through the Gartner Hype Cycle for Emerging Technologies," she says. "We've had the hype and now we're seeing the reality. The metaverse was capturing people's imagination. But we're still looking for proven use cases that are going to generate value." Searle's assertion that the metaverse is suffering a familiar fate to other over-hyped technologies is certainly one explanatory factor for the drop in interest in the metaverse. But another huge contributory factor is the rapid rise of artificial intelligence (AI). ... Of course, the rapid take up of generative AI isn't the only narrative in this story; there's a whole series of potential concerns, such as hallucinations, plagiarism, and ethics, that need to be dealt with sooner rather than later. But if you want to impress your family and friends with a tool that seems to work like magic, then generative AI is the one. On the other hand, the metaverse -- just like the blockchain before it -- feels a bit like a rabbit that's stuck in a magician's hat. Entering the metaverse often isn't as easy as its proponents have promised. 


Why the service industry needs blockchain, explained

The difficulty of integrating blockchain with existing infrastructure and processes is a significant obstacle. Because service providers frequently use a variety of platforms and technologies, achieving seamless integration can be difficult. It might be difficult to protect data security and privacy while still adhering to regulations. Blockchain’s transparency conflicts with the requirement to protect sensitive customer information, necessitating careful design and implementation of privacy measures. Another major challenge is establishing communication and data exchange across various blockchain networks and traditional systems. To facilitate seamless interoperability, service providers need to spend time developing standardized protocols, which can be expensive and time-consuming. Moreover, there are scalability concerns. Blockchain networks, especially public ones, may face limitations in handling a high volume of transactions efficiently. Delays and higher expenses may result from this, especially in service industries where several quick transactions are necessary.


Why developer productivity isn’t all about tooling and AI

Creative work requires some degree of isolation. Each time they sit down to code, developers build up context for what they’re doing in their head; they play a game with their imagination where they’re slotting their next line of code into the larger picture of their project so everything fits together. Imagine you’re holding all this context in your head — and then someone pings you on Slack with a small request. All the context you’ve built up collapses in that instant. It takes time to reorient yourself. It’s like trying to sleep and getting woken up every hour. ... Another factor that gets in the way of developer productivity is a lack of clarity on what engineers are supposed to be doing. If developers have to spend time trying to figure out the requirements of what they’re building while they’re building it, they’re ultimately doing two types of work: Prioritization and coding. These disparate types of work don’t mesh. Figuring out what to build requires conversations with users, extensive research, talks with stakeholders across the organization and other tasks well outside the scope of software development. 


Here’s What a Software Architect Does in an Agile Team

An architect is probably not a valid role on an agile team. I admit I have at times been overzealous about non-coding members of a dev team. The less militant version of this is to be aware of ‘pigs’ and ‘chickens’ in the agile sense: when making breakfast, chickens lay eggs but pigs literally have skin in the game, so only pigs should attend daily agile stand-ups. There are three problems with the role of architect in classic agile. Think of these as Lutheran protestant theses nailed to the door — or, more likely, to the planning wall.

First, there are no upfront design phases in agile: “The best architectures, requirements, and designs emerge from self-organizing teams.”

Second, an architect cannot be an approver and a cause of delay. This leads to the idea that architectural know-how should be spread out among the other team members. That is often the case — however, it elides the fact that architectural responsibility then falls to no one in particular, even if people feel they may be accountable. Remember your RACI matrix.

Third, should all agile developers be architects on a project? This makes little sense, since architecture describes a singular plan.


AI’s Ability to Reason: Statistics vs. Logic

As a simplistic existence proof that today’s AI does not reason with logic, consider the following problem in basic algebra, which was given to Bing/OpenAI GPT to solve. The gist of the problem, shown in the figure below, is that there are two rectangles, each having the same height (though this detail is not clearly stated in the sourcing 6th-grade math text) but different widths. Areas for each are given. The rectangles are positioned in the corresponding math text to suggest that they may be aggregated into a larger rectangle whose width is the sum of the widths of the smaller rectangles — maybe as a hint toward the length. The request to find the length (height) and widths is a test of whether OpenAI’s GPT via Bing would determine if there are sufficient equations to match the unknowns. There aren’t. GPT didn’t discover that the number of equations is one too few. Instead, it attempted to find the length and widths, and it responded suggesting it had successfully solved the math problem. Everything started to go amok when it missed that the number of equations fell short of the number of unknowns — the third equation given above is simply a function of the other two.
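The underlying counting argument — two independent equations in three unknowns — is easy to verify mechanically. The areas below are made up, since the article does not reproduce the problem's numbers; the point is that any positive height yields consistent widths, so no unique answer exists:

```python
# Two rectangles share an unknown height h; only their areas are given.
A1, A2 = 24, 36   # hypothetical areas, for illustration

# Two equations (h*w1 = A1, h*w2 = A2) but three unknowns (h, w1, w2):
# for ANY positive height, consistent widths exist, so the answer isn't unique.
candidates = []
for h in (2, 3, 4, 6):
    w1, w2 = A1 / h, A2 / h
    assert h * w1 == A1 and h * w2 == A2   # each candidate satisfies both equations
    candidates.append((h, w1, w2))

print(candidates)  # several distinct, equally valid solutions
```

The combined-rectangle relation h*(w1 + w2) = A1 + A2 is the sum of the first two equations, which is why it adds no new information.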


Security Is EVERYBODY’s Business, But CISOs Need to Lead

Cybersecurity is not an audit or internal audit function; there is a fine line of difference there. And as much as the CISO is seen as somewhat of an enforcer, they need to be seen as an enabler to the business. CISOs need to have very direct, effective, and transparent communication with the board members when it comes to quantification of everything that they’re doing. And when I say quantification, what I mean is quantification of risks to the organization. Some of the board members will be closer to cybersecurity risks. Some of them may be closer to a reputational risk or a financial risk. But if a CISO can stitch that story together and quantify it for the audience of the board, I think that goes a long way. That’s what’s needed because, in the situation in the market that we are in right now, with the threat landscape changing, with new capabilities coming into play, I think it’s critical. CISOs need to ensure the message is articulated well in the boardroom.


How Agile Managers Use Uncertainty to Create Better Decisions Faster

Here's the problem I see with big, long-term, and final management decisions: the decision is too large to have any certainty at all. Remember I said I don't take long consulting engagements? Early in my consulting career, I learned that even a “guaranteed” consulting project was not a guarantee at all. Sure, the client might pay a kill fee (a portion of the unused project budget), but most of the time, the client said (on a Friday afternoon), “Thanks. The world has changed. Don't come back on Monday.” While I always continued my marketing so my business would survive, I felt as if the clients cheated themselves. Because we thought we had more time, we didn't create smaller goals and achieve them. Our work was incomplete—according to their goals. And that's what people remember. Not that they changed the circumstances, but that we didn't finish. That's exactly what happens when managers try to decide for a long time without revisiting their decisions. The world changes. If the world changes enough, the managers feel the need to lay people off, not just stop efforts. Those layoffs are a result of too-long and too-large management decisions.


Technical and digital debt can devastate your digital ambition

Of course, no organisation can achieve zero technical debt (is that even possible?). The judgement here lies in targeting existing technical debt in priority order: deciding what not to do is just as important as deciding what to do. You will be better able to manage the high expectations of stakeholders, shape the transformation, and prioritise investment when you have this insight. Ask yourself these questions. What technical debt will act as an anchor when you try to increase the pace of change, irrespective of how fast your new IT engineering and product-based approaches to change are? To put it another way, which single piece of technical debt will limit the flow of value, irrespective of how slick everything else is? To be able to adapt at pace and at short notice, responding to market opportunities, where is your underlying technology strong but resistant to change? Customers simply expect your digital channels to work; where must you improve the reliability of your service? Where can you increase cost effectiveness or risk mitigation through targeted automation as one of the treatment strategies available to you?


How AI and Crypto is Transforming the Future of Decentralized Finance

As time has passed, the crypto industry has evolved into a breeding ground for fraudulent activities and deception. Safeguarding investors from fraud has become increasingly vital, especially with the influx of initial coin offerings and new platforms entering the market. The encouraging news is that AI and crypto together can effectively prevent fraud attempts and ensure that investors adhere to financial compliance. AI bots, for example, can detect and flag fraudulent transactions, preventing them from proceeding unless confirmed by a human. Confirming crypto transactions often takes up to 24 hours due to reliance on consensus methods, and such transaction delays pose a challenge for the crypto sector. Recent advancements in AI technology have brought enhanced trade management options, and some companies are adopting innovative consensus methods that significantly reduce transaction times to just a few seconds. This improvement holds potential benefits for the


Secure Together: ATO Defense for Businesses and Consumers

First off, businesses need to take the lead in forming a stronger partnership with their customers. This means educating both customers and employees on proper security measures. Websites operating with user accounts, engaging individuals and corporations, often find themselves in the crosshairs of swindlers intent on ATO. We mentioned above that phishing is a common tactic. It’s imperative to consistently educate customers and employees about the looming menace of online security breaches like phishing, including how phishing attempts trick people and how to avoid getting caught in them. Adopt a vigilant stance on security by ingraining robust preventive protocols, including routine password updates and guidelines for safeguarding user credentials. ... Training does not end there. The MGM Resorts cyberattack we cited above also involved a fraudster tricking a customer support help desk. Businesses must train their staff on how to stop these attempted breaches — for example, by knowing how to ask questions that only a legitimate account holder could answer.



Quote for the day:

"You may be good. You may even be better than everyone esle. But without a coach you will never be as good as you could be." -- Andy Stanley

Daily Tech Digest - November 11, 2023

Mika becomes world's first robot CEO

In the era where many workers are worrying about artificial intelligence (AI) replacing their jobs, one company has announced that it is hiring the first humanoid robot chief executive officer (CEO). Dictador, a spirit brand based in Colombia’s Cartagena, has gone viral for appointing Mika, a humanoid robot. Mika is a research project between Hanson Robotics and Dictador and has been customised to represent the company’s values. Hanson Robotics also created Sophia, the popular humanoid robot. ... At a recent event, Mika said, “My presence on this stage is purely symbolic. In reality, conferring an honorary professor title upon me is a tribute to the greatness of the human mind in which the idea of artificial intelligence was born. It is also a recognition of the courage and open-mindedness of the owner of Dictador, who entrusted his company to a humble spokesperson with a processor instead of a heart.” Emphasising how she is better than current CEOs, including Musk and Zuckerberg, she said, “In reality the notion of two powerful tech bosses having a cage fight is not a solution for improving the efficiency of their platforms”.


Four Recommendations to Improve the Cyber Resilience Act

Policymakers must take a more proportionate, risk-based approach to determining the risk level of a product with digital elements and offer greater certainty for manufacturers to ascertain whether a product is a critical one. While the Commission’s original proposal categorised every product in several broad categories as critical, the co-legislators now have the opportunity to take a more sophisticated approach. We recommend leveraging the Council’s risk-based approach with some key amendments, outlined here. ... It is crucial that the reporting obligations are aligned with the NIS 2 Directive to streamline reporting requirements and to avoid an unmanageable reporting burden for manufacturers and responsible authorities. This means that reporting under the Cyber Resilience Act should be made to the CSIRTs under a single distributed reporting platform, and that reporting on security incidents should concern only “significant incidents”, as outlined in the European Parliament’s text.


What is a digital transformation strategy? Everything you need to know

At its most basic level, a DX strategy is the use of digital technologies to create or reimagine how customers are served and how work gets done. A well-thought-out and well-crafted digital transformation strategy ensures an organization correctly identifies what products, services and work need to be created or reimagined to remain competitive. For nonprofits or government agencies, this might mean effectively and efficiently delivering on their missions. ... A thoughtful DX strategy also focuses the organization's attention, said Kamales Lardi, author of The Human Side of Digital Business Transformation and CEO of Lardi & Partner Consulting. More specifically, it focuses the organization on the most pressing digital initiatives -- those that deliver value toward meeting its enterprise-wide goals. Lardi said this approach keeps teams from pursuing initiatives that introduce new technologies without understanding how they'll deliver value or implementing transformation projects that only help segments of the enterprise.


SolarWinds Fires Back at SEC Fraud Charges

“We categorically deny those allegations,” SolarWinds’ blog post said. “The company had appropriate controls in place before SUNBURST. The SEC misleadingly quotes snippets of documents and conversations out of context to patch together a false narrative about our security posture.” SolarWinds’ blog post details what it says are false claims that the attack exploited a VPN vulnerability. Other technical issues regarding the company’s compliance with the National Institute of Standards and Technology (NIST) cybersecurity framework (CSF) are also defended in the post. “The SEC is mixing apples and oranges, underscoring its lack of cybersecurity experience,” the blog post charged. “… the SEC fundamentally misunderstands what it means to follow the NIST CSF.” However, much of the SEC’s complaint focuses on Brown’s alleged mishandling of controls that led to the breach. The SEC contends that Brown in 2018 and 2019 stated "the current state of security leaves us in a very vulnerable state for our critical assets," and that "access and privilege to critical systems/data is inappropriate."


Software Architecture Fundamentals: Building the Foundations of Robust Systems

Solutions architecture is the bridge between business requirements and software solutions. Architects in this domain transform business needs into comprehensive software designs, often through diagrammatic representations. They also evaluate the commercial impacts of various technology choices. Software architecture, the centerpiece of our discussion, is closely aligned with software development. It not only impacts the structural composition of software but also influences the organization’s structure. Software architects play a pivotal role in translating business objectives into concrete software components and their responsibilities, all while ensuring the system’s healthy evolution over time. ... In a distributed architecture, systems must adopt self-preservation mechanisms: avoid overloading a failing system, since excessive requests to a struggling system can exacerbate the situation; recognize that a slow system is often worse than an offline system in terms of user experience; and ensure the system has a way to assess its own health.
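The first self-preservation mechanism — not overloading a failing system — is commonly implemented with a circuit breaker. Here is a minimal sketch; the failure threshold and cool-off period are arbitrary illustration values, and production libraries add half-open probing, metrics, and per-endpoint state:

```python
import time

class CircuitBreaker:
    """Stop hammering a failing dependency; retry only after a cool-off period."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")  # shed load
            self.opened_at = None          # cool-off elapsed: allow one probe
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0                  # success closes the circuit
        return result
```

Failing fast turns the "slow system" case into a quick, explicit error the caller can handle, which is usually the better user experience.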


Building resilience-focused organizations

Arguably, the most important aspect of building resilient software systems is automation. It effectively reduces human error, speeds up repetitive tasks, and guarantees consistent configurations. Through the automation of deployment, monitoring, and scaling processes, software systems can quickly adapt to evolving conditions and recover from failures more efficiently. To automate build commands, Amazon created a centralized, hosted build system called Brazil. The main functions of Brazil are compiling, versioning, and dependency management, with a focus on build reproducibility. Brazil executes a series of commands to generate artifacts that can be stored and then deployed. To deploy these artifacts, Apollo was created. Apollo was developed to reliably deploy a specified set of software artifacts across a fleet of instances. Developers define the process for a single host, and Apollo coordinates that update across the entire fleet of hosts. Developers could simply push-deploy their application to development, staging, and production environments. No logging into the host, no commands to run.
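The pattern of "define the process for a single host, let a coordinator roll it across the fleet" can be sketched generically. This is not Apollo's actual design — the function names, batch-wise strategy, and stubbed per-host step are invented for illustration:

```python
def deploy_to_host(host: str, artifact: str) -> bool:
    """Per-host procedure the developer defines once (stubbed: pretend the
    install and health check succeeded)."""
    return True

def rolling_deploy(hosts, artifact, batch_size=2):
    """Coordinator updates the fleet in small batches, halting on failure so a
    bad artifact never reaches the whole fleet."""
    done = []
    for i in range(0, len(hosts), batch_size):
        batch = hosts[i:i + batch_size]
        if not all(deploy_to_host(h, artifact) for h in batch):
            return done, False            # stop the rollout; remaining hosts untouched
        done.extend(batch)
    return done, True
```

The resilience benefit is in the coordinator: batching plus halt-on-failure bounds the blast radius of a bad deployment without any manual host access.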


What Are Data Sharing Agreements and Why Are They Important?

Before establishing data sharing agreements, it is crucial to have a clear understanding of their purpose and scope. These agreements serve as legal documents that outline the terms, conditions, and responsibilities of all parties involved in sharing data. By comprehending the purpose and scope, organizations can ensure that they establish agreements that effectively protect their interests and meet their objectives. The purpose of data sharing agreements is multifaceted. ... Several key factors must be considered: Data protection laws: Organizations must comply with data protection laws that govern the collection, storage, and sharing of personal information. Intellectual property rights: Data sharing agreements should address ownership rights of the shared data, including any intellectual property rights associated with it. Clear provisions on how the data can be used, reproduced, or modified should be included. Confidentiality and security: Agreements should outline measures to protect the confidentiality and security of shared data. This includes provisions for encryption, access controls, breach notification procedures, and liability for any breaches. 


Cyberattack Forces San Diego Hospital to Divert Patients

The attack on Tri-City Medical is among a rash of similarly disruptive ransomware and other cyber incidents that have been relentlessly hitting healthcare sector entities, including regional hospitals, in recent years, months and weeks. That includes an October ransomware attack on five hospitals in Ontario, Canada, and their shared IT services provider, which has been disrupting patient care at the facilities for several weeks and for which recovery work is expected to last into mid-December (see: Ontario Hospitals Expect Monthlong Ransomware Recovery). The Canadian hospitals have been directing many patients, including some cancer patients who need radiology treatment, to seek medical care elsewhere (see: 5 Ontario Hospitals Still Reeling From Ransomware Attack). A study released in January by the Ponemon Institute surveying 579 healthcare technology and security leaders says that patient care diversions due to ransomware are on the rise, with diversions to other facilities up from 65% the year before.


Sure, real-time data is now 'democratized,' but it's only a start

"With platforms taking complexity away from the individual user or engineer, adoption has accelerated across the industry. Innovations such as SQL support help democratize real-time data and provide ease of access to the vast majority rather than a select few." ... Many companies' infrastructures aren't ready, and neither are the organizations themselves. "Some have yet to understand or see the value of real time, while others are all-in, with solutions designed for streaming throughout the organization," says Raikmo. "Combining datasets in motion with advanced techniques such as watermarking and windowing is not a trivial matter. It requires correlating multiple streams, combining the data in memory, and producing merged stateful result sets at enterprise scale and resilience." The good news is that not every bit of data needs to be streaming or delivered in real time. "Organizations often fall into the trap of investing in resources to make every data point they visualize real time, even when it is not necessary," Jayaprakash points out. "However, this approach can lead to exorbitant costs and become unsustainable."
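To make the windowing-and-watermarking difficulty concrete, here is a toy sketch of event-time tumbling windows with a watermark (all names and parameters are illustrative; real engines such as Apache Flink or Kafka Streams manage this state durably and at scale, which is precisely the non-trivial part):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60, allowed_lateness=10):
    """Count events per tumbling event-time window. A window is closed
    (emitted) only once the watermark, here the max event time seen minus
    an allowed lateness, has passed its end; events arriving after their
    window closed are dropped. Toy in-memory sketch only."""
    counts = defaultdict(int)
    closed = {}
    watermark = float("-inf")
    for ts, key in events:  # each event is (event_time_seconds, key)
        watermark = max(watermark, ts - allowed_lateness)
        start = (ts // window_secs) * window_secs
        if start + window_secs <= watermark:
            continue  # too late: this window was already closed
        counts[(start, key)] += 1
        # Close every window the watermark has fully passed.
        for wk in [wk for wk in counts if wk[0] + window_secs <= watermark]:
            closed[wk] = counts.pop(wk)
    closed.update(counts)  # end of stream: flush remaining open windows
    return closed
```

Even this stripped-down version has to juggle out-of-order arrival, per-key state, and a lateness policy, which hints at why doing it across correlated streams at enterprise scale is hard.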


AI is the future of cybersecurity. This is how to adopt it securely

Used effectively, AI can help prevent vulnerabilities from being written in the first place, radically transforming the security experience. AI provides context for potential vulnerabilities and secure code suggestions from the start (though please still test AI-produced code). These capabilities enable developers to write more secure code in real time and finally realize the true promise of "shift left." This is revolutionary. Traditionally, "shift left" meant getting security feedback after you'd brought your idea to code, but before deploying it to production. With AI, security is truly built in, not bolted on. There's no further way to "shift left" than doing so in the very place where your developers are bringing their ideas to code, with their AI pair programmer helping them along the way. It's an exciting new era where generative AI will be on the front line of cyber defense. However, it's also important to note that, in the same way that AI won't replace developers, AI won't replace the need for security teams. We're not at Level 5 self-driving just yet. 



Quote for the day:

"Nobody can go back and start a new beginning, but anyone can start today and make a new beginning." -- Maria Robinson