Daily Tech Digest - December 11, 2023

Enterprise Architecture – Supporting Resources on Demand

As the subscription economy grows, the market could become saturated with providers offering varying levels of service quality. Businesses should carefully evaluate their options, considering factors such as customer support, scalability, and the sophistication of available resources. The positive impact of selling EA as a subscription service, however, is clear. With more service providers offering cloud solutions, there is more competition for your business. You, as the business customer, have more options, which can lead to better services and pricing. Business customers of all sizes can get access to advanced technology and data storage capabilities through a subscription. This can open economic doors to developing nations, extending business growth to players who would otherwise be unable to participate in a digital transformation journey. This fosters a more inclusive and diverse tech landscape, where breakthroughs can emerge from unexpected corners of the business world. You can focus on growing your core business without the traditional burdens of upfront investment and the complexity of building and managing infrastructure from scratch.


Trends in Data Governance and Security: What to Prepare for in 2024

In 2023, many companies turned to do-it-yourself (DIY) data governance to manage their data. Yet, without the help of data governance experts or professionals, this proved insufficient, leaving compliance gaps and data security errors in its wake. While DIY data governance seemed like a cost-effective solution, it has serious consequences for companies: it often lacks the comprehensive security protocols and expertise that professional data governance provides, leaving them exposed to data breaches and other security threats. Worse, the approach often involves piecemeal solutions that do not integrate well with each other, creating security gaps and leaving data vulnerable to attack. As a result, DIY data governance may not be able to keep up with the constantly evolving data privacy landscape, including new regulations and compliance requirements. Companies that rely on it are exposing themselves to significant risks and will see the repercussions in 2024.


Generative AI is off to a rough start

One big problem, among several others that Duckbill Chief Economist Corey Quinn highlights, is that although AWS felt compelled to position Q as significantly more secure than competitors like ChatGPT, it’s not. I don’t know that it’s worse, but it doesn’t help AWS’ cause to position itself as better and then not actually be better. Quinn argues this comes from AWS going after the application space, an area in which it hasn’t traditionally demonstrated strength: “As soon as AWS attempts to move up the stack into the application space, the wheels fall off in major ways. It requires a competency that AWS does not have and has not built up since its inception.” Perhaps. But even if we accept that as true, the larger issue is that there’s so much pressure to deliver on the hype of AI that great companies like AWS may feel compelled to take shortcuts to get there (or to appear to get there). The same seems to be true of Google. The company has spent years doing impressive work with AI yet still felt compelled to take shortcuts with a demo. As Parmy Olson captures, “Google’s video made it look like you could show different things to Gemini Ultra in real time and talk to it. You can’t.”


CIOs grapple with the ethics of implementing AI

Even with a team focused on AI, identifying risks and understanding how the organization intends to use AI both internally and publicly is challenging, McIntosh says. Team members must also understand and address the inherent possibility of AI bias, erroneous claims, and incorrect results, he says. “Depending on the use cases, the reputation of your company and brand may be at stake, so it’s imperative that you plan for effective governance.” With that in mind, McIntosh says it’s critical that CIOs “don’t rush to the finish line.” Organizations must create a thorough plan and focus on developing a governance framework and AI policy before implementing and exposing the technology. Identifying appropriate stakeholders, such as legal, HR, compliance and privacy, and IT, is where Plexus started its ethical AI process, McIntosh says. “We then created a draft policy to outline the roles and responsibilities, scope, context, acceptable use guidelines, risk tolerance and management, and governance,” he says. “We continue to iterate and evolve our policy, but it is still in development. We intend to implement it in Q1 2024.”


Accenture takes an industrialized approach to safeguarding its cloud controls

Accenture developed a virtual cloud control factory to support five major, global cloud infrastructure providers and enable reliable inventory; consistent log and alert delivery to support security incident detection; and predictable, stable, and repeatable processes for certifying cloud services and releasing security controls. The factory features five virtual "departments": research and development performs service certification, control definition, selection, measurement, and continual re-evaluation; the production floor designs and builds controls; quality assurance tests the controls; shipping and receiving integrates controls with compliance reporting tools; and customer service provides support to users after a control goes live. "What we decided to do was centralize that cloud control development, get all the needs into one place, start organizing them in a way that we could run them through a factory and get them out there so people can use common controls, common architecture that had a chance of keeping up with [our engineers'] innovation sitting on top of the [major cloud platforms'] innovation," Burkhardt says.


Pressure on Marketers Will Drive Three Key Data Moves in 2024

Data clouds help achieve that goal. In both time and expense, organizations can no longer afford to jump between different systems to try to make sense of what a customer wants and formulate a real-time response in the moment of interaction. With a CDP sitting directly on top of a data cloud, it is easier and less expensive to build a unique customer profile and then activate that profile across multiple systems. Organizations recognize that first-party data is a valuable asset and is the foundation for delivering a personalized customer experience (CX), but for too long business users have been stymied by complex, unintegrated marketing stacks and time-consuming data transformations. That approach to making data actionable -- turning data into insight -- is no longer sustainable when customers expect real-time, personalized experiences that are consistent across channels. ... Moving to a data cloud and coupling it with a CDP’s automated data quality and identity resolution addresses these issues head-on, and that trend will continue -- particularly for customer-facing brands that see a data cloud with an enterprise-grade CDP as a relatively fast, inexpensive way to monetize their customer data.


Initial Agile Requirements and Architecture Modeling

Talk to most agilists, and particularly the purists, and they’ll claim that they don’t do any modeling up front. This of course is completely false; they just use different terminology, such as “populate the backlog” rather than initial requirements modeling and “identify a runway” instead of initial architecture modeling. Sigh. Some of the more fervent agilists may even tell you about the evils of big modeling up front, which is why they choose to eschew anything that smells like up-front thinking. ... The goal of initial architecture modeling on an agile team is to identify what the team believes to be a viable strategy for building the solution. Sufficiency is determined by your stakeholders – can you exhibit an understanding of the existing environment, and the future direction of your organization, and show how your proposed strategy reflects that? Your initial architecture model should be JBGE (just barely good enough) in that it addresses, at a high level, the business and technical landscapes that your solution will operate within. This modeling effort is often led, not dictated, by the architecture owner on your team.


Why are IT professionals not automating?

25% of participants highlighted cost and resources as potential obstacles. They wonder if they need to create a custom solution and, if so, whether it’s cost-effective or cheaper to continue with manual maintenance. They are also concerned about the resources required to maintain an automated solution. 20% admit that they and their teams lack the knowledge or expertise to choose an automated solution. They are not familiar with automation in general or with the specific requirements of automating their systems. The survey results clearly indicate that many IT professionals are not familiar with or don’t see the value of certificate automation. Or is it that they haven’t thought about it enough? After all, certificates have been part of our IT infrastructure for a very long time; they are not exciting, but they do work, so why fix something that is not broken? Unfortunately, when the 90-day Google edict eventually becomes reality, it will increase the need for renewal/replacement of SSL/TLS certificates by four times (4X) the current pace. IT professionals may be underestimating the burden that it will put on their teams.
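
For context on the 4X figure: publicly trusted certificates can currently live for up to 398 days, so a 90-day maximum means renewing roughly four times as often. Automation usually starts with simply knowing when certificates expire; below is a minimal Python sketch of such an expiry check. The hostname list and the 30-day alert threshold are illustrative assumptions, not part of the article.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(hostname: str, port: int = 443) -> int:
    """Return how many days remain before the host's TLS certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2024 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

if __name__ == "__main__":
    for host in ["example.com"]:  # placeholder: substitute your own inventory
        remaining = days_until_expiry(host)
        status = "renew soon" if remaining < 30 else "ok"  # 30 days is arbitrary
        print(f"{host}: {status} ({remaining} days left)")
```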


How Could AI Be a Tool for Workers?

The benefits for companies designing and using AI systems are vast and readily apparent. Tools that can complete work in a fraction of the time at a fraction of the cost are a boon for the bottom line. “The main beneficiaries of the technology are global technology giants primarily based in the United States,” says Michael Allen, CTO of enterprise content management company Laserfiche. He points out that these companies have the resources to accrue the massive amounts of data required to train AI models. Companies that adopt these powerful AI models can leverage them to cut costs. Allen points out that many companies will likely use AI to shift away from outsourcing. “A lot of firms outsource mostly routine clerical work to places like India, and I believe that's going to be threatened or impacted significantly by AI that will be able to do that work faster and cheaper,” he says. The way that AI devalues entry-level work is already being seen. Stephanie Bell is a senior research scientist at the nonprofit coalition Partnership on AI, which created guidelines to ensure AI economic benefits are shared. She offers examples in the digital freelance market. 


Bryan Cantrill on AI Doomerism: Intelligence Is Not Enough

Cantrill had titled his talk “Intelligence is not enough: the humanity of engineering.” Here the audience realizes they’re listening to the proud CTO of a company that just shipped its own dramatically redesigned server racks. “I want to focus on what it takes to actually do engineering… I actually do have a bunch of recent experience building something really big and really hard as an act of collective engineering…” Importantly, the common thread for these bugs was “emergent” properties — things not actually designed into the parts, but emerging when they’re all combined together. “For every single one of those, there is no piece of documentation. In fact, for several of those, the documentation was actively incorrect. The documentation would mislead you ... Cantrill put up a slide saying “Intelligence alone does not solve problems like this,” presenting his team at Oxide as possessed of something uniquely human. “Our ability to solve these problems had nothing to do with our collective intelligence as a team…” he tells his audience. “We had to summon the elements of our character. Not our intelligence — our resilience.”



Quote for the day:

“I'd rather be partly great than entirely useless.” -- Neal Shusterman

Daily Tech Digest - December 10, 2023

'Move Fast And Break Things' Doesn’t Apply To AI

Given the urgency around generative AI, those looking for a first-mover advantage or those fearing being left behind may be tempted to adopt the "move fast and break things" mantra. After all, it has long been a staple of Silicon Valley culture. But following it would be a mistake in this instance. ... Consider the analogy of building a house—you wouldn’t just start digging immediately. You need to lay the groundwork first: planning, consulting structural engineers, arranging site visits, commissioning architectural drawings, and securing building control approval. It is all essential work that needs to be completed before a brick is laid. But once it has been, confidence in the build skyrockets because there is a clear procedure to follow. Going slower to go faster also applies to AI. Developing a strategy for AI requires deep expertise and a first-class analysis of organizational data. This involves getting a holistic view of the data within an organization, understanding which elements could have inherent bias and lead to the wrong insights, and building a picture of the level of automation that could improve operational efficiency.


Generative AI as a copilot for finance and other sectors

While advanced technologies such as quantum computing and blockchain have long been a part of Moody's Analytics' IT arsenal, generative AI has spun off many complex models. That can be challenging for a company with large data sets that is concerned about data privacy and security, said Caroline Casey, general manager for customer experience and innovation at the company, in an interview. Before releasing its Research Assistant product on Dec. 4, Moody's created an internal copilot product -- not to be confused with Microsoft Copilot. Research Assistant is a search and analytical tool built on Azure OpenAI and uses OpenAI's GPT-4. "We know that the purpose of this is not to replace a human," Casey said. "It's to take out the kind of mundane work -- the trying to find information, the retrieval, the searching -- and actually help them to focus on where they've got the best expertise." Moody's began its journey in the summer after the CEO encouraged all employees of Moody's Corp. to be innovators.


World’s First Cybersecurity & AI Guidelines: Experts Weigh in

Speaking to Techopedia, Nic Chavez, Field Chief Information Officer at DataStax, noted that one of the important takeaways is the cautious and collaborative approach employed by the UK to develop the guideline. “I think it’s important to recognize the caution and collaboration with which NCSC approached this endeavor. By seeking feedback from the international community, including other NATO nations, NCSC was able to triangulate recommendations that were reasonable, swiftly actionable and strong.” In his reaction, Jeff Schwartzentruber, Senior Machine Learning Scientist at eSentire and Industry Research Fellow at Toronto Metropolitan University, told Techopedia that releasing these AI guidelines is a step in the right direction as it will help to expand international cooperation and accelerate commitments on the regulation and appropriate use of AI technologies. “I see this as a positive step forward in terms of expanding the international cooperation and discourse on the regulation and appropriate use of AI technologies. ...”


How the blockchain industry can adopt cybersecurity

While the theoretical underpinnings of blockchain offer unparalleled security benefits, the practical implementation introduces potential vulnerabilities. One such vulnerability lies in the exchange of data between blocks, where cybercriminals can intercept and manipulate information. To fortify blockchain systems against such attacks, the adoption of advanced encryption measures becomes paramount. Just as Distributed Denial of Service (DDoS) attacks are thwarted in traditional systems, blockchain must implement robust encryption to safeguard data exchange between blocks. Another challenge in blockchain security arises from censorship attacks, where malicious validators intentionally disrupt or halt the blockchain protocol. Additionally, attackers may masquerade as validators, gaining trust within the system and executing Trojan attacks. To address these threats, it is essential to employ traditional cybersecurity strategies, including encryption, key management, and DNS hygiene. By integrating artificial intelligence (AI) into the system, organizations can enhance their ability to detect consensus attacks, particularly in Proof of Stake (PoS) validation methods.
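
To make the call for "robust encryption to safeguard data exchange" concrete, symmetric authenticated encryption protects a payload exchanged between nodes from both eavesdropping and tampering. Below is a minimal sketch using the Python cryptography package; the payload is illustrative, and in a real deployment the key would come from a key-management system rather than being generated inline.

```python
# pip install cryptography
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()  # illustrative only; real keys live in a KMS
cipher = Fernet(key)

payload = b'{"block": 1024, "tx": "transfer", "amount": 5}'  # toy payload
token = cipher.encrypt(payload)  # encrypted and integrity-protected

try:
    assert cipher.decrypt(token) == payload  # fails loudly if tampered with
except InvalidToken:
    print("payload was modified in transit")
```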


SLAM Attack: New Spectre-based Vulnerability Impacts Intel, AMD, and Arm CPUs

The attack is an end-to-end exploit for Spectre based on a new feature in Intel CPUs called Linear Address Masking (LAM) as well as its analogous counterparts from AMD (called Upper Address Ignore or UAI) and Arm (called Top Byte Ignore or TBI). "SLAM exploits unmasked gadgets to let a userland process leak arbitrary ASCII kernel data," VUSec researchers said, adding it could be leveraged to leak the root password hash within minutes from kernel memory. While LAM is presented as a security feature, the study found that it ironically degrades security and "dramatically" increases the Spectre attack surface, resulting in a transient execution attack, which exploits speculative execution to extract sensitive data via a cache covert channel. ... AMD has also pointed to current Spectre v2 mitigations to address the SLAM exploit. Intel, on the other hand, intends to provide software guidance prior to the future release of Intel processors that support LAM. In the interim, Linux maintainers have developed patches to disable LAM by default. 


Taking a strategic view of telecom networks in Indo-Pacific

The Indo-Pacific is home to some of the world’s fastest-growing digital economies harnessing technology for national governance and economic development. Telecommunications connectivity – the internet and mobile penetration base – forms the backbone for these economies. Unsurprisingly, the telecom market in the region is witnessing an upgrade. By 2030, telecom companies are expected to invest US$259 billion in the development of networks in the region. These investments will foster the expansion of the digital economy and act as catalysts for innovation, growth and prosperity, with 5G playing an indispensable role in this. 5G represents a generational shift in wireless telecommunications – anchored on higher data transfer speed and ultra-low latency. It holds the promise of revolutionising how people communicate and consume content on the internet and transforming edtech, telemedicine, precision agriculture, and the Internet of Things. However, 5G technology is not cheap, and developing economies have faced budgetary constraints in deploying it.


How to stop digital twins from being used against you

Beyond device optimization and prolonged lifecycles, however, there’s a dark side of digital twins that warrants careful consideration and mitigation strategies. First and foremost, digital twins offer hackers another chance at sensitive company information, particularly when the device data is stored in plain text in the cloud. Providing these models with up-to-date data means providing sensitive information. This goes beyond mere device information; it can sometimes include the personally identifiable data of employees and customers. Meanwhile, the use of international servers to run digital twin operations further complicates things. Different jurisdictions impose different privacy requirements, meaning that cross-border data exchanges to run these simulations can bring regulatory and compliance headaches. Additionally, the connected devices themselves can cause security issues. For example, IoT sensors sometimes operate on outdated and vulnerable operating systems, and cheap devices are well known for default credentials and unencrypted communications, an important concern as more than two billion devices come online next year.


The Role of Non-Executive Directors in Driving Innovation

The agile nature of startups grants them an advantage in driving disruptive innovation. They have a greater appetite for risk and tend to be nimbler than their established counterparts. Free from the shackles of middle management’s confining layers and quarterly reporting pressures, these small entities are often seen as the leaders of innovation. On the other side of the spectrum, large companies, despite having the funds to finance innovation, tend to exhibit risk avoidance to protect individual reputations and the status quo. But the acquisition of innovative companies can be a strategic move for larger corporations, provided the innovative culture of the smaller entity is preserved in the process. ... NEDs play an important role in balancing the need for funding innovation against potential impacts on existing business practices. But conflict often emerges between securing immediate profits for shareholders and investing in long-term growth fueled by innovation. However, there is evidence that companies built for the future—those that prioritize innovation—can generate shareholder returns almost three times greater than those of the broader market reflected in the S&P 1200.


Surviving The Polycrisis Of Technological Singularity

First, let us agree on what a technological singularity would look like. It is an idea that puts us in an era where predictability ceases to exist and the conventional understanding of technological evolution is of little use. Historically, we as a society have failed to predict the effects of technological evolutions. Humans have usually underestimated the effects of technological disruptions in the long term. The fusion of various revolutionary technologies in this era, such as quantum computing, nanotechnology, superconductivity and AI, will surely propel us into a zone of immense possibilities and daunting uncertainties that are hard to grasp, let alone predict. One could argue that, if we have reached this stage today, then we are in the nascent stages of technological singularity. The open letter written by leaders from various areas of society calling for a pause in AI development for six months is, to me, one clear signal of the beginning of technological singularity. We can slow down its pace, but we may not be able to stop it. At the core of this discussion lies a profound question: Can humanity harness the potential of these technologies and mitigate the corresponding risks simultaneously?


DevOps Strategies for Connected Car Development

The connected car is a complex ecosystem of software systems. These vehicles have numerous systems that communicate with each other, the driver and the outside world. Managing the development of these systems can be a daunting task, and this is where DevOps strategies come in. DevOps aims to shorten the system development life cycle and provide continuous delivery with high software quality. This methodology is particularly suited to the complex software systems of connected cars, as it encourages a holistic view of the development process, ensuring that all components work together seamlessly. Moreover, DevOps helps to manage the complexity of car software systems by automating tasks, reducing errors and improving efficiency. The use of automated tools for configuration management, deployment and monitoring means less manual work, fewer mistakes, and quicker problem resolution. One of the greatest challenges in connected car development is the need for speed. In this fast-paced industry, companies are under pressure to develop and deploy new features quickly to stay competitive. 



Quote for the day:

"If you genuinely want something, don't wait for it--teach yourself to be impatient." -- Gurbaksh Chahal

Daily Tech Digest - December 09, 2023

AI in Biotechnology: The Big Interview with Dr Fred Jordan, Co-Founder of FinalSpark

Of course, the ethical consideration is increased because we are using human cells. From an ethical perspective, what is interesting is that all this wouldn’t be possible without iPSCs (induced pluripotent stem cells). Ethically, we don’t need to take the brain of a real human being to conduct experiments. ... The ultimate goal is to develop machines with a form of intelligence. We want to create a real function, something useful. Imagine inputting a picture to the organoid, and it responds, recognizing objects like cats or dogs. Right now, we are focusing on one specific function – the significant reduction in energy consumption, potentially millions to billions of times less than digital computers. As a result, one practical application could be cloud computing, where these neuron-based systems consume significantly less energy. This offers an eco-friendly alternative to traditional computing processing. Ultimately, the future of AI in biotechnology holds huge potential for various applications because it’s a completely new way of looking at neurons. It’s like the inventors of the transistor not knowing about the internet.


AI regulatory landscape and the need for board governance

“We all need to have a plan in place, and we need to be thinking about how are you using it and whether it is safe.” She underscored the urgency, noting that journalists are investigating where AI has gone wrong and where it’s discriminating against people. Additionally, there are lawyers who seize potential litigation opportunities against ill-prepared, deep-pocketed organizations. "Good AI hygiene is non-negotiable today, and you must have good oversight and best practices in place," she asserted. Despite a lack of comprehensive Congressional AI legislation, Vogel clarified that AI is not without oversight. Four federal agencies recently committed to ensuring fairness in emerging AI systems. In a recent statement, agency leaders committed to using their enforcement powers if AI perpetuates unlawful bias or discrimination. AI regulatory bills have been proposed by over 30 state legislatures, and the international community is also ramping up efforts. Vogel cited the European Union's AI Act as the AI equivalent of the GDPR bill, which established strict data privacy regulations affecting companies worldwide.


Data Management, Distribution, and Processing for the Next Generation of Networks

Investments in cloud architectures by CSPs span their own resources – but they also extend to third parties; federated cloud architectures are the result. These interconnected cloud assets allow CSPs to extend their reach, share resources and collaborate with other stakeholders to secure desired outcomes. Why do we combine this with edge computing? Because resources at the edge may not be in the CSP’s own domain. Edge systems may be a combination of CSP-owned and other resources that are used in parallel to deliver a particular service. And, regardless of overall pace towards 5G SA, edge computing is now firmly in demand by enterprises (and CSPs), to support a new generation of high-performance and low latency services. This demand won’t only be served by CSPs, however. Many enterprises are seeking to deploy private networks – and the resources required to support their applications may be accessed via federated clouds. This user may not need its own UPF, but it may benefit from one offered by another provider in an adjacent edge location, or delivered by a systems integrator that runs multiple private networks with shared resources, available on demand.


Understanding Each Link of the Cyberattack Impact Chain

There are two ways to assess the cyberattack impact chain: Causes and effects. To build stakeholder support for CSAT, CISOs have to show the board how much damage cyberattacks are capable of causing. Beyond the fact that the average cost of a data breach reached an all-time high of $4.45 million in 2023, there are many other repercussions: Disrupted services and operations, a loss of customer trust and a heightened risk of future attacks. CSAT content must inform employees about the effects of cyberattacks to help them understand the risks companies face. It’s even more important for company leaders and employees to have a firm grasp on the causes of cyberattacks. Cybercriminals are experts at exploiting employees’ psychological vulnerabilities – particularly fear, obedience, craving, opportunity, sociableness, urgency and curiosity – to steal money and credentials, break into secure systems and launch cyberattacks. Consider the MGM attack, which relied on vishing – one of the most effective social engineering tactics, as it allows cybercriminals to impersonate trusted entities to deceive their victims.


Another Cyberattack on Critical Infrastructure and the Outlook on Cyberwarfare

Critical infrastructure attacks, like the one against the water authority in Pennsylvania, have occurred in the wake of the Israel-Hamas war. And geopolitical tension and turmoil expands beyond this conflict. Russia’s invasion of Ukraine has sparked cyberattacks. Chinese cyberattacks against government and industry in Taiwan have increased. “This is just going to be an ongoing part of operating digital systems and operating with the internet,” Dominique Shelton Leipzig, a partner and member of the cybersecurity and data privacy practice at global law firm Mayer Brown, tells InformationWeek. While kinetic weapons are still very much a part of war, cyberattacks are another tool in the arsenal. Successful cyberattacks against critical infrastructure have the potential for widespread devastation. “The landscape of warfare is changing,” says Warner. And the weaponization of artificial intelligence is likely to increase the scale of cyberwarfare. “We have the normal technology that we use for denial-of-service attacks, but imagine being able to do all of that on an even greater scale,” says Shelton Leipzig.


Continuous Testing in the Era of Microservices and Serverless Architectures

Continuous testing is a practice that emphasizes the need for testing at every stage of the software development lifecycle. From unit tests to integration tests and beyond, this approach aims to detect and rectify defects as early as possible, ensuring a high level of software quality. It extends beyond mere bug detection, encapsulating a holistic approach: while unit tests scrutinize individual components, integration tests evaluate the collaboration between diverse modules. The practice not only minimizes defects but also strengthens the robustness of the entire system. ... Decomposed testing strategies are key to effective microservices testing. This approach advocates for the examination of each microservice in isolation. It involves a rigorous process of testing individual services to ensure their functionality meets specifications, followed by comprehensive integration testing. This methodical approach not only identifies defects at an early stage but also guarantees seamless communication between services, aligning with the modular nature of microservices.
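
As a minimal illustration of the decomposed strategy, the sketch below exercises one hypothetical service in isolation, replacing its dependency on a neighboring inventory service with a mock so that defects surface before integration testing begins. All service and function names are invented for the example.

```python
# test_pricing_service.py -- run with pytest; all names are hypothetical
from unittest.mock import Mock
import pytest

def quote_price(item_id, inventory_client, base_price=10.0):
    """Toy pricing logic that depends on a separate inventory microservice."""
    stock = inventory_client.get_stock(item_id)
    if stock == 0:
        raise ValueError("out of stock")
    return base_price * (1.5 if stock < 5 else 1.0)  # scarce items cost more

def test_surge_pricing_when_stock_is_low():
    inventory = Mock()
    inventory.get_stock.return_value = 3  # simulate the remote service
    assert quote_price("sku-42", inventory) == 15.0
    inventory.get_stock.assert_called_once_with("sku-42")

def test_out_of_stock_items_are_rejected():
    inventory = Mock()
    inventory.get_stock.return_value = 0
    with pytest.raises(ValueError):
        quote_price("sku-42", inventory)
```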


Understanding Master Data Management’s integration challenges

The integration of data within MDM is a very complex task, which should not be underestimated. Many organizations have a myriad of source systems, each with its own data structure and format. These systems can range from commercial CRM or ERP systems to custom-built legacy software, all of which may use different data models, definitions, and standards. In addition, organizations often desire real-time or near-real-time synchronization between the MDM system and the source systems. Any changes in the source systems need to be immediately reflected in the MDM system to ensure data accuracy and consistency. Using a native connector from the MDM system to read data from your operational systems can provide several benefits, such as ease of integration. However, the choice between a native connector and a custom-built one mostly depends on your specific needs, the complexity of your data, the systems you’re integrating, and the capabilities of your MDM system.


Aim for a modern data security approach

Beginning with data observability, a “shift left” implementation requires that data security become the linchpin before any application is put into production. Instead of being confined to data quality or data reliability, security needs to become another use case application of the underlying data and be unified into the rest of the data observability subsystem. By doing this, data security benefits from the alerts and notifications stemming from data observability offerings. Data governance platform capabilities typically include business glossaries, catalogs, and data lineage. They also leverage metadata to accelerate and govern analytics. In “shift left” data governance, the same metadata is augmented by data security policies and user access rights to further increase trust and allow appropriate users to access data. Leveraging and establishing comprehensive data observability and governance is the key to data democratization. As a result, these proactive and transparent views over the security of critical data elements will also accelerate application development and improve productivity.


Google expands minimum security guidelines for third-party vendors

"The expanded guidance around external vulnerability protection aims to provide more consistent legal protection and process to bug hunters that want to protect themselves from being prosecuted or sued for reporting findings," says Forester Principal Analyst Sandy Carielli. "It also helps set expectations about how companies will work with researchers. Overall, the expanded guidance will help build trust between companies and security researchers." The enhanced guidance encourages more comprehensive and responsible vulnerability disclosures, says Jan Miller, CTO of threat analysis at OPSWAT, a threat prevention and data security company. "That contributes to a more secure digital ecosystem, which is especially crucial in critical infrastructure sectors where vulnerabilities can have significant repercussions," he says. ... The enhanced guidance encourages more comprehensive and responsible vulnerability disclosures, says Jan Miller, CTO of threat analysis at OPSWAT, a threat prevention and data security company. 


Europe Reaches Deal on AI Act, Marking a Regulatory First

"Europe has positioned itself as a pioneer, understanding the importance of its role as global standard setter," said Thierry Breton, the European commissioner for internal market, who had a key role in negotiations. The penalties for noncompliance with the rules can lead to fines of up to 7% of global revenue, depending on the violation and size of the company. What the final regulation ultimately requires of AI companies will be felt globally, a phenomenon known as the Brussels effect since the European Union often succeeds in approving cutting-edge regulations before other jurisdictions. The United States is nowhere near approving a comprehensive AI regulation, leaving the Biden administration to rely on executive orders, voluntary commitments and existing authorities to combat issues such as bias, deep fakes, privacy and security. European officials had no difficulty in agreeing that the regulation should ban certain AI applications such as social scoring or that regulations should take a tiered-based approach that treats high-risk systems, such as those that could influence the outcome of an election, with greater requirements for transparency and disclosure.



Quote for the day:

''It is never too late to be what you might have been." -- George Eliot

Daily Tech Digest - December 07, 2023

Top 5 Trends in Cloud Native Software Testing in 2023

As digital threats become more sophisticated, there’s a heightened focus on security testing, particularly among large enterprises. This trend is about integrating security protocols right from the initial stages of development. Tools for static and dynamic application security testing (SAST and DAST) are becoming essential in testing workflows. ... The TestOps trend integrates testing into the continuous development cycle, echoing the collaborative and automated ethos of DevOps. TestOps focuses on enhancing communication between developers, testers, and operations, ensuring continuous testing and quicker feedback loops. It leverages real-time analytics to refine testing strategies, ultimately boosting software quality and efficiency. Extending the principles of DevOps, GitOps uses Git repositories as the backbone for managing infrastructure and application configurations, including testing frameworks. ... The rise of ephemeral test environments is a game-changer. These environments are created on demand and are short-lived, providing a cost-effective way to test applications in a controlled environment that closely mirrors production.
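
One common way to get such on-demand, short-lived environments is the testcontainers library, which starts a real dependency in Docker for the duration of a test session and discards it afterwards. A sketch in Python, assuming Docker is available; the image tag and schema are placeholders.

```python
# pip install "testcontainers[postgres]" sqlalchemy pytest  (requires Docker)
import pytest
import sqlalchemy
from testcontainers.postgres import PostgresContainer

@pytest.fixture(scope="session")
def pg_engine():
    # The container lives only as long as the test session, then is removed.
    with PostgresContainer("postgres:16") as pg:
        engine = sqlalchemy.create_engine(pg.get_connection_url())
        yield engine
        engine.dispose()

def test_schema_roundtrip(pg_engine):
    with pg_engine.begin() as conn:
        conn.execute(sqlalchemy.text("CREATE TABLE t (id INT PRIMARY KEY)"))
        conn.execute(sqlalchemy.text("INSERT INTO t VALUES (1)"))
        assert conn.execute(sqlalchemy.text("SELECT count(*) FROM t")).scalar() == 1
```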


Dump C++ and in Rust you should trust, Five Eyes agencies urge

Microsoft, CISA observes in its guidance, has acknowledged that about 70 percent of its bugs (CVEs) are memory safety vulnerabilities, with Google confirming a similar figure for its Chromium project and that 67 percent of zero-day vulnerabilities in 2021 were memory safety flaws. Given that, CISA is advising that organizations move away from C/C++ because, even with safety training (and ongoing efforts to harden C/C++ code), developers still make mistakes. "While training can reduce the number of vulnerabilities a coder might introduce, given how pervasive memory safety defects are, it is almost inevitable that memory safety vulnerabilities will still occur," CISA argues. ... Bjarne Stroustrup, creator of C++, has defended the language, arguing that ISO-compliant C++ can provide type and memory safety, given appropriate tooling, and that Rust code can be implemented in a way that's unsafe. But that message hasn't done much to tarnish the appeal of Rust and other memory safe languages. CISA suggests that developers look to C#, Go, Java, Python, Rust, and Swift for memory safe code.
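
What "memory safe" means in practice is that an out-of-bounds access is caught by the compiler or runtime instead of silently corrupting adjacent memory. A trivial illustration in Python, one of the languages CISA lists:

```python
buffer = [0] * 8

try:
    buffer[8] = 42  # one element past the end
except IndexError as exc:
    # The equivalent write in C/C++ is undefined behavior and may silently
    # overwrite neighboring data; a memory-safe runtime refuses and raises.
    print(f"caught out-of-bounds write: {exc}")
```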


How the insider has become the no.1 threat

For the organisation, this means the insider threat has not only become more pronounced but harder to counter. It requires effective management on two fronts: managing the remote/mobile workforce and dissuading employees from swapping credentials and data for cash. For these reasons, businesses need to reinforce the security culture through staff awareness training and step up their policy enforcement, in addition to applying technical controls to ensure data is protected at all times. That’s not what is happening today. The Apricorn survey found only 14% of businesses control access to systems and data when allowing employees to use their own equipment remotely, a huge drop from 41% in 2022. Nearly a quarter require employees to seek approval to use their own devices, but they do not then apply any controls once that approval has been granted. Even more concerning is that the number of organisations that don’t require approval or apply any controls has doubled over the past year. This indicates a hands-off approach that assumes a level of implicit trust, directly contributing to the problem of the insider threat.


WestRock CIDO Amir Kazmi on building resiliency

There are three leadership principles I would highlight that help build resilience in the team. First is recognizing the pace of change and responding to the impact it has on a team. It’s not getting slower; it’s getting faster. One of the behaviors that can help your team is to ‘explain the why.’ Set the context before the content behind what needs to be accomplished so we’re all on the same journey. Second is recognizing that we have to instill a learning and growth mindset in the culture, in the leadership, and in the fabric of what we’re trying to achieve. Many businesses are shifting their business models from product to service, and as leaders, it’s important to build a level of learning in that journey for your teams. One of the leaders that I admire and have learned from is John Chambers, who has said, ‘It’s all about speed of innovation and changing the way you do business.’ If we don’t reimagine ourselves, we will get disrupted. Third is transparency around what the key priorities are — because not everything can be a priority — and then creating flexibility around those priorities and how we get to the outcomes.


AI Governance in India: Aspirations and Apprehensions

While India’s stance on AI regulation has sometimes appeared to waver, it is steadily working towards establishing a clear regulatory approach and AI governance mechanism, especially as the country assumes a more prominent role in the area of AI-related international cooperation. AI-enabled harms and security threats exist at all three levels of the AI stack: At the hardware level, there are vulnerabilities in the physical infrastructure of AI systems. At a foundational model level, there are concerns around the use of inappropriate datasets, data poisoning, and issues related to data collection, storage, and consent. At the application level, there are threats to sensitive and confidential information as well as the proliferation of capability-enhancing tools among malicious actors. Therefore, while the governance of the tech stack is a priority, governance of the organisations developing AI solutions, or the people behind the technology, could also be productive. Even as democratisation has made AI more accessible, assigning responsibility and defining accountability for the operation of AI systems have become more difficult. 


Liability Fears Damaging CISO Role, Says Former Uber CISO

The average person on the street would think it reasonable that a CISO should be responsible for all aspects of an organization’s security, Sullivan acknowledged. However, the reality is the CISO role is unique among executive positions. “The CISO is fighting an uphill fight every day in their job. They’re begging for resources, they’re trying to get the rest of the company to slow down and think about the things they care about,” he noted. “Our job is different from everybody else’s. When you’re the executive responsible for security, you are the only executive who has active adversaries outside your organization trying to destroy you,” he added. ... Despite the growing personal risks for CISOs, Sullivan emphasized that “we should not run away from the situation,” adding that “if we do, we’ll miss a huge opportunity.” He believes there is a fundamental shift coming in terms of the regulation that’s on the horizon in cybersecurity, which will force organizations to revise how they approach security, and current security professionals must be ready to facilitate this change.


Middle East CISOs Fear Disruptive Cloud Breach

Data sovereignty regulations and de-globalization trends, for example, have led to the deployment of multi-cloud infrastructures that can support regional regulations and business mandates, according to the March research report, The Future of Cloud Security in the Middle East. "You will have your own cloud service provider within each country and already countries are adopting that culture — be it in the UAE or Saudi Arabia or any other country in the region," Rajesh Yadla, director head of information security for Al Hilal Bank, stated in that report. "The reason is to make sure that the cloud service providers are compliant with all these regulations." Business and government leaders have taken cybersecurity seriously, however, with security the top factor in choosing a cloud provider, with 43% of companies prioritizing security, compared to 19% prioritizing cost, according to the report. Both Saudi Arabia and the UAE rank in the top 10 nations for cybersecurity, as measured by the Global Cybersecurity Index 2020, the most recent cybersecurity rankings of countries across the globe compiled by the International Telecommunication Union (ITU).


Parenting in the Digital Age: A Guide to Choosing Tech-Enabled Preschools

In recent years, technology integration in preschoolers’ education has become a game-changer in delivering personalised learning. By making education more fun and interactive with a robust arsenal – AR applications, ERP apps and much more – teachers and parents have been able to tap into the receptivity of young minds, paving the way for both cognitive and emotional development. Augmented Reality (AR), an interactive experience that assimilates the real world and computer-generated content, stimulates multiple sensory modalities and has made a successful mark in opening up new avenues in preschool education. By allowing young learners to immerse themselves in realistic experiences, AR elevates the learning process with computer simulations, 3D virtualisation, etc., making it enhanced, effective and evocative. Departing from the traditional chalkboard and chart paper educational approach for preschoolers, parents have seismically shifted their preference to a tech-integrated curriculum. The advent of AR technology for early childhood learning brings forth a layer of interactive and engaging experiences.


Cyber Strategic Ambivalence Will Hit A Tipping Point In 2024

There are indications that technological advances, geopolitics, social influences, and other externalities are creating the conditions for what Thomas Kuhn termed a “paradigm shift” (his 1962 book, The Structure of Scientific Revolutions, described the dynamics and the framework by which structural change emerges). The conditions for change that will result in a paradigm shift are the breadth, types and severity of attacks that are ongoing and will likely increase in 2024. The assessed global cyberattack losses in 2023 amount to $8 trillion, which is larger than any national economy except for the US and China! In other words, the collective black market – the illicit profits generated from cybercrime – is a larger economy than Germany or Japan or India. That is a look at the problem in monetary terms. Cyberattacks are now regularly compromising critical infrastructure, which places public safety at risk. In May of 2023, Denmark’s critical infrastructure network experienced the largest cyberattack ever, which was highly coordinated and could have resulted in power outages.


How server makers are surfing the AI wave

There appears to be strong demand for high performance computing (HPC) hardware that includes graphics processing units (GPUs) for accelerating the performance of workloads and GPU-based servers. ... There is a growing realisation among many businesses that the hyperscalers are behind the curve with regards to supporting the intellectual property of their GenAI users. This is opening up opportunities for specialist GPU cloud providers to offer AI acceleration in a way that allows customers to train foundational AI models based on their own data. Some organisations are also likely to buy and run private cloud servers configured as GPU farms for AI acceleration, fuelling the significant growth in demand for GPU-equipped servers from the major hardware providers. HPE recently announced an expanded strategic collaboration with Nvidia to offer enterprise computing for GenAI. HPE said the co-engineered, pre-configured AI tuning and inferencing hardware and software platform enables enterprises of any size to quickly customise foundation models using private data and deploy production applications anywhere.



Quote for the day:

''Your most unhappy customers are your greatest source of learning.'' -- Bill Gates

Daily Tech Digest - December 06, 2023

Three Ways Generative AI Is Overhauling Data Management

First, prioritize accuracy in SQL generation. NL2SQL has come a long way in understanding natural language queries, but some large language models (LLMs) are better than others in dealing with nuanced or complex questions. Second, ensuring efficient query execution on ad hoc questions is paramount. Historically, interactive querying in a data warehouse environment meant gathering requirements in advance and engineering the data through caching, denormalizing, and other techniques. Generative AI has changed expectations -- users now want immediate answers to novel questions. ... The shift towards vector embeddings is driven by the realization of the remarkable benefits they bring to storing and searching both structured and unstructured data as vectors. The core advantage of vector embeddings lies in their ability to represent complex data in an efficient format. By converting data into high-dimensional vectors, it becomes possible to capture the semantic relationships, context, and similarities between different data points.
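
To make the embedding idea concrete: once data is mapped to vectors, "semantically similar" reduces to "high cosine similarity between vectors." A minimal NumPy sketch follows; the toy 4-dimensional vectors stand in for the output of a real embedding model, which would emit hundreds of dimensions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.9, 0.1, 0.0, 0.2])
doc_a = np.array([0.8, 0.2, 0.1, 0.3])  # close in meaning to the query
doc_b = np.array([0.0, 0.9, 0.8, 0.1])  # unrelated content

for name, doc in [("doc_a", doc_a), ("doc_b", doc_b)]:
    print(name, round(cosine_similarity(query, doc), 3))
# doc_a scores higher, so a vector search would rank it first.
```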


Reinforcement learning is useful in environments where precise reward functions can guide the learning process. It’s particularly effective in optimal control scenarios, gaming and aligning large language models (LLMs) with human preferences, where the goals and rewards are clearly defined. Robotics problems, with their complex objectives and the absence of explicit reward signals, pose a significant challenge for traditional RL methods. ... Despite its advantages, imitation learning is not without its pitfalls. A notable issue is the “distribution mismatch problem,” where an agent may encounter situations outside the scope of its training demonstrations, leading to a decline in performance. “Interactive imitation learning” mitigates this problem by having experts provide real-time feedback to refine the agent’s behavior after training. This method involves a human expert monitoring the agent’s policy in action and stepping in with corrective demonstrations whenever the agent strays from the desired behavior.
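
The best-known instantiation of this interactive scheme is the DAgger algorithm: the learner's own policy drives the environment while the expert labels the states actually visited, which directly counters the distribution mismatch described above. A schematic Python sketch; every callable is a placeholder for a real environment, expert, and training routine.

```python
def dagger(env_reset, env_step, expert_action, train, policy,
           rounds=5, horizon=100):
    """Schematic DAgger loop: the learner acts, the expert labels."""
    dataset = []  # aggregated (state, expert_label) pairs across all rounds
    for _ in range(rounds):
        state = env_reset()
        for _ in range(horizon):
            dataset.append((state, expert_action(state)))  # expert corrects
            state, done = env_step(policy(state))          # learner drives
            if done:
                break
        policy = train(dataset)  # refit the policy on everything so far
    return policy
```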


Don’t make Apache Kafka your database

The right strategy is to let Kafka do what it does best, namely ingest and distribute your events in a fast and reliable way. For example, consider an ecommerce website with an API that would traditionally save all data directly to a relational database with massive tables—with poor performance, scalability, and availability as the result. Introducing Kafka, we can design a superior event-driven ecosystem and instead push that data from the API to Kafka as events. This event-driven approach separates processing into separate components. One event might consist of customer data, another may have order data, and so on—enabling multiple jobs to process events simultaneously and independently. This approach is the next evolution in enterprise architecture. We’ve gone from monolith to microservices and now event-driven architecture, which reaps many of the same benefits of microservices with higher availability and more speed. Once events are sitting in Kafka, you have tremendous flexibility in what you do with them. If it makes sense for the raw events to be stored in a relational database, use an ecosystem tool like Kafka Connect to make that easy.
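
As a sketch of the pattern described above, the API pushes an order event to Kafka instead of writing straight to a relational database; downstream jobs then consume it independently. This uses the confluent-kafka Python client; the broker address and topic name are placeholders.

```python
# pip install confluent-kafka
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder broker

def report(err, msg):
    if err is not None:
        print(f"delivery failed: {err}")

# One event per business fact; consumers (or Kafka Connect sinks) decide
# independently whether and where to persist it.
order = {"order_id": "o-1001", "customer_id": "c-42", "total": 59.90}
producer.produce(
    "orders",                              # placeholder topic
    key=order["order_id"],
    value=json.dumps(order).encode(),
    on_delivery=report,
)
producer.flush()  # block until the broker acknowledges the event
```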


What it Takes to Be Your Organisation’s DPO or Data Privacy Lead

Just because you sought expert opinion on the matter a few years ago doesn’t mean you’re in the clear. ‘Once compliant’ doesn’t mean ‘still compliant’. It’s possible that you now need to appoint a DPO (data protection officer) or data privacy lead to be the single point of contact for questions, concerns, breaches, impact assessments or communication with the regulatory authorities. ... It’s not just the EU GDPR, the UK GDPR and the DPA 2018 that we may need to ensure compliance with. Privacy laws exist in almost every country and are relevant wherever you do business. You can design your data privacy systems such that they meet all these legal requirements. ... A DPO isn’t just a trusted adviser during business-as-usual times. They are at the command centre of a cross-functional team in tough times. Faced with an incident or a breach, a well-trained DPO can avert a crisis before social media can cause a catastrophe. Well-versed data privacy leads and DPOs can leap into action when needed, swiftly addressing and remediating issues, reporting to the necessary authorities and instigating lasting change. 


What should be in a company-wide policy on low-code/no-code development

The lesson here is to be thorough in assessment and then document and define existing use cases. Understanding why certain user groups are currently leveraging a particular low-code/no-code platform will help security and business leaders make risk calculations that will determine the course of future policies. The most immediate policy that will come out of this work will be one that defines acceptable use cases for low-code/no-code across the business. “Specify the application of low-code and no-code development across departments, as well as clearly state the purpose of low-code and no-code development,” says Vikas Kaushik, CEO of TechAhead, a development consultancy. This policy of purpose and scope is crucial for setting the course and the tone of the risk management policies around use cases. Some companies may choose to be very granular about this, breaking it down by lines of business, business function, user groups, or teams. Others may simply just delineate between professional developers and so-called citizen developers — tech-savvy business stakeholders.


The Grim Reality of a Cyberattack: From Probability to Certainty

In the unfortunate event that an organization gets hacked, there are certain actions a cybersecurity team can take (or avoid) that significantly impact recovery time and cost. The first action is to report the incident to all relevant authorities, just as someone would declare a physical crime. Many organizations are legally obligated to report such instances and informing the authorities helps protect other enterprises from similar attacks. It is worth noting that authorities and regulators are not going to assign blame. They seek to learn valuable lessons from attacks and build hacker profiles that help minimize the consequences other organizations may face. In addition, enterprises should alert their cyberinsurance providers. This is often a prerequisite for filing a claim, and evidence must be presented to receive compensation. After the appropriate authorities have been notified, it is important that IT teams slow down and avoid making costly mistakes in haste.


AI revolutionising leadership talent identification

With leadership positions being critical for growth and sustenance, any bias in selecting C-suite people can have a damaging impact on the performance of organizations. AI ensures that the entire process of recruiting leaders remains objective, fair, and just. Unlike humans who are driven by feelings and emotions, AI algorithms solely select the candidates based on their skills, competencies, and qualifications. This leaves little room for any prejudice and allows firms to recruit diversified, dynamic, and vibrant individuals at the top echelons of the organization. ... Not only does AI help in recruiting top-level employees but also helps in predicting their engagement behaviours, attrition patterns, and potential switchover. By combining the employment records of candidates with their present level of engagement, AI can predict the attrition rate at the top leadership positions. For example, AI tools can alert employers about the sudden decrease in employees’ engagement levels or increase in their job search activity online. The information can be used by employers to enhance engagement with their employees, enhance their talent retention efforts, or devise a contingency plan in case of a sudden exit.


If You Want People to Follow You, Stop Being a Boss — 8 Steps to Truly Effective Leadership

The approach to mistakes and failures differentiates a leader from a boss. Where a boss might see a mistake as a failure to be criticized, a leader views it as an opportunity for growth. Positive reinforcement involves recognizing the effort, providing constructive feedback, and encouraging a mindset of continuous learning. This approach not only helps in skill development but also instills a sense of confidence and loyalty within the team, fostering a workplace culture where innovation is encouraged, and risks are viewed as steps towards growth. ... Empowerment is a key trait of effective leadership. It involves trusting the team's capabilities and allowing autonomy in their roles. This empowerment fosters a sense of ownership and responsibility among team members, leading to greater job satisfaction and innovation. In contrast, micromanagement can stifle creativity, lower morale and hinder productivity. Leaders who empower rather than micromanage find their teams are more motivated, creative, and ultimately more effective in achieving organizational goals.


Data governance and government: The need for effective and protective data management

Data governance in government involves establishing and enforcing policies, procedures, and standards to ensure the effective management, use, and protection of data. Several key issues and challenges are commonly faced in the context of data governance in government:

Data privacy and security: Governments handle vast amounts of sensitive and personally identifiable information. Ensuring the privacy and security of this data is a paramount concern, especially in the face of increasing cyber threats and data breaches.

Compliance with regulations: Governments must adhere to various regulations and compliance standards concerning data management, such as data protection laws, privacy regulations, and industry-specific requirements.

Interoperability: Government agencies often operate with disparate systems and databases. Achieving interoperability and ensuring seamless data exchange among different agencies is a significant challenge impacting the efficiency and effectiveness of government services.


Linus Torvalds on the state of Linux today and how AI figures in its future

Indeed, Torvalds hopes that AI might really help by being able "to find the obvious stupid bugs because a lot of the bugs I see are not subtle bugs. Many of them are just stupid bugs, and you don't need any kind of higher intelligence to find them. But having tools that warn about more subtle cases where, for example, it may just say 'this pattern does not look like the regular pattern. Are you sure this is what you need?' And the answer may be 'No, that was not at all what I meant. You found an obvious bug. Thank you very much.' We actually need autocorrects on steroids. I see AI as a tool that can help us be better at what we do." But, "What about hallucinations?" asked Hohndel. Torvalds, who will never stop being a little snarky, said, "I see the bugs that happen without AI every day. So that's why I'm not so worried. I think we're doing just fine at making mistakes on our own." Moving on, Torvalds said, "I enjoy the fact that open source, the notion of openness, has gotten so much more widely accepted. I enjoyed it particularly because I remember what it was thirty years ago when I had started this project, and people would ask me, 'Why?'



Quote for the day:

"To be successful you must accept all challenges that come your way. You can't just accept the ones you like." -- Mike Gafka