Daily Tech Digest - December 17, 2023

Soft Skills Every CISO Needs to Inspire Better Boardroom Relationships

CISOs now need to understand how to communicate with stakeholders and the board around an incident. The only way to do this is to collaborate not only with chief financial officers (CFOs) to understand what stakeholders want to hear, but also with the legal department to set clear standards with the board on what they define as material. Working together allows the CISO to break down these silos, ensuring close collaboration toward business goals without adding unnecessary cybersecurity risk. If done right, with the appropriate transparency, any additional measures needed to combat a new or emerging risk or regulation should be easier to accept. ... CISOs also have to be good storytellers, using data to craft a narrative around how the business is mitigating growing risk. This includes taking a key performance indicator (KPI) — again using language and metrics that the board and other business stakeholders understand — and showing whether existing efforts are falling short and, if so, presenting a strategy to improve results.


AI-Powered Test Case Generation: A Game-Changer for Testers

Unlike traditional methods, AI brings new intelligence to the test case creation process, complementing everything from functional to performance testing services. The process involves active use of machine learning algorithms to analyze patterns and identify critical scenarios. In addition, natural language processing (NLP) enables AI to comprehend and interpret complex requirements, streamlining the translation of specifications into effective test cases. Predictive modelling, meanwhile, anticipates potential system behaviors, contributing to more comprehensive test coverage. Overall, this amalgamation of technologies empowers AI to generate test cases autonomously, significantly reducing manual effort and enhancing the precision of test scenarios. As a result, AI not only accelerates the testing lifecycle but also elevates the overall quality and reliability of software applications. By harnessing these capabilities, QA service providers and test teams can take a transformative approach to redefining traditional test case generation practices.
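
The ML/NLP pipeline described above is beyond a short sketch, but the kind of structured output such tools aim for can be illustrated with a deterministic stand-in: classic boundary-value analysis, which enumerates the edge-case scenarios a trained model would be expected to surface from a requirement. The function and field names here are illustrative, not taken from any particular tool.

```python
def boundary_cases(field: str, lo: int, hi: int) -> list[dict]:
    """Generate boundary-value test cases for a numeric field that
    must accept values in the inclusive range [lo, hi]."""
    values = [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]
    return [
        {"field": field, "value": v,
         "expect": "accept" if lo <= v <= hi else "reject"}
        for v in values
    ]

# A requirement like "age must be between 18 and 65" yields six cases,
# two of which (17 and 66) the system under test should reject.
cases = boundary_cases("age", 18, 65)
```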


UK AI National Institute Urges 'Red Lines' For Generative AI

The report singled out autonomous agents as a specific application of generative AI that warrants close oversight in a national security context. Autonomous agents build on LLMs by interacting with their environment and taking actions with little human intervention. The technology has the potential to accelerate national security analysis, such as by rapidly processing vast amounts of open-source data, providing preliminary risk assessments and generating hypotheses for human analysts to pursue, the report said. But critics told report authors that the technology falls short of human-level reasoning and can't reproduce the innate understanding of risk that humans use to avoid failure. Among the mitigations the report suggested is recording the actions and decisions taken by autonomous agents: "The agent architecture must not obscure or undermine any potential aspects of explainability originating from the LLM." It also suggests attaching warnings to "every stage" of generative AI output and documenting what an agent-based system would do in a worst-case scenario.


From Vision to Value: A DevOps Framework for Sustainable Innovation

The landscape of innovation is fertile ground for emerging technologies, which act as enablers and accelerators in the product development lifecycle. The plethora of tools available today — from sophisticated design software to robust development environments — has dramatically reshaped the process of innovation. Technologies such as cloud computing platforms, low-code development environments, and powerful coding frameworks empower organizations to bring ideas to life with unprecedented speed and efficiency. In the spectrum of tooling options, the decision between low-code platforms and traditional coding environments presents a strategic choice for teams. Low-code platforms can significantly reduce the complexity and time involved in creating applications, democratizing the development process and allowing a broader range of professionals to contribute to innovation. This accessibility can accelerate the prototyping phase, enabling rapid iteration and user feedback integration. Conversely, traditional coding remains indispensable for building highly customized and sophisticated systems. 


From Institutions to AI: The Blockchain Trends Emerging for 2024

Technology moves faster than regulation, and banks and regulators must be able to collaborate more quickly and innovate for the technology to succeed, thrive, and benefit real people, says Anthony Moro, CEO of Provenance Blockchain Foundation, which is responsible for the Provenance Blockchain, a Layer 1 blockchain purpose-built for financial services. “2024 will be a period in which regulators gain more familiarity with innovations being developed on-chain and increase participation in experiments and discussion,” he says. In addition, private, permissioned environments are also poised to help streamline banks’ internal operations, including cross-border payments and settlements, according to Moro. They offer a potential solution for banks and financial institutions to participate in the evolving digital economy while adhering to regulatory requirements and maintaining a level of control over their own products and processes. “Banks and even regulators can use permissioned blockchain zones as ‘sandboxes’ to test out new financial products and services in a controlled and safe environment, which ultimately minimizes risks and stays within the confines of existing regulations,” Moro says.


Ditch Brainstorming: Adam Grant's Brainwriting Revolution

Unlike traditional brainstorming sessions, brainwriting levels the playing field and ensures that all team members, regardless of their inclination towards extroversion or introversion, get an equal opportunity to contribute. The process of writing ideas down not only allows thoughtful consideration but also prevents quieter voices from being overshadowed. ... Written communication drastically minimises the fear of judgment compared to voicing an idea aloud in front of the group's dominant personalities. This fosters an environment where individuals feel comfortable sharing unconventional or "wild" ideas, which can lead to breakthrough innovations that might be overlooked in a traditional brainstorming setting. ... Brainwriting allows individuals to think more deeply about their ideas before sharing them with the group. This deliberate reflection can result not only in more refined and fully developed concepts but also in more confidence in the idea, ultimately improving the overall quality of the generated ideas. ... Unlike the sequential nature of verbal brainstorming, brainwriting lets multiple team members generate ideas simultaneously.


If Computer Science Is Doomed, What Comes Next?

But when it comes to AI replacing human programmers, “I think this is all something that we really have to take seriously…” Welsh said. “I don’t think that this is just — I am exaggerating for effect. But the industry is going to change. So the natural question then is, well, what happens when we cut humans out of the loop? How do we build software? How do we ship product?” Welsh ponders the ramifications of this world. Our current code optimizations like readability and reusability “are only because poor humans have to wrangle with this stuff.” But imagine a world where “It doesn’t really matter if it’s duplicative or repetitive or modular or nicely abstracted.” Welsh put up a diagram of how he envisions the software team of the future… Welsh hedges that he’s “not sure” if all of computer science will one day become a historical artifact — but presents his vision of a “plausible” future, with people “not writing programs in the conventional way that we do today, and instead, having an AI do their bidding.” It happens partly through the use of platforms like Fixie, his company’s platform for easily creating AI-based applications.


4 ways to overcome your biggest worries about generative AI

Avivah Litan, distinguished VP analyst at Gartner, says one of the key issues to be aware of is the pressure for change from people outside the IT department. "The business is wanting to charge full steam ahead," she says, referring to the adoption of generative AI tools by professionals across the organization, with or without the say-so of those in charge. "The security and risk people are having a hard time getting their arms around this deployment, keeping track of what people are doing, and managing the risk." As a result, there's a lot of tension between two groups: the people who want to use AI, and the people who need to manage its use. "No one wants to stifle innovation, but the security and risk people have never had to deal with something like this before," she says in a video chat with ZDNET. "Even though AI has been around for years, they didn't have to really worry about any of this technology until the rise of generative AI." Litan says the best way to allay concerns is to create a task force for AI that draws on experts from across the business and which considers privacy, security, and risk.


Why Cloud Auditing Data Federation is important for an enterprise

The Cloud Auditing Data Federation (CADF) standard is significant because it facilitates the federation of normative audit event data to and from cloud providers. It offers fresh perspectives on the provider's hardware, software, and network infrastructure used to power tenant applications in a multi-vendor setting. Regardless of where applications run (on-premises, in a hybrid cloud, or in a public cloud), compliance with corporate policies and industry regulations is a crucial component of every organization's strategy. By making existing cloud and service audit interfaces, technologies, and tools more consistent, compatible, and interoperable, CADF seeks to address significant issues. ... Application security (AppSec) is the practice of identifying and reducing security flaws while reducing the probability of a successful attack. It addresses every security issue that arises during the design, creation, and deployment of an application. CADF supports application security certification, self-management, and self-audit in cloud environments, which can help customers ensure compliance with corporate policies and industry regulations.
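
CADF's value comes from its normative event format. As a rough sketch of what a CADF activity record looks like (the top-level field names follow the DMTF DSP0262 schema; the resource identifiers and the initiator/target/observer typeURI values here are illustrative):

```python
import uuid
from datetime import datetime, timezone

def cadf_event(action: str, outcome: str,
               initiator_id: str, target_id: str, observer_id: str) -> dict:
    """Assemble a minimal CADF activity event with the standard's
    required attributes: id, eventType, eventTime, action, outcome,
    and the initiator/target/observer resource triple."""
    return {
        "typeURI": "http://schemas.dmtf.org/cloud/audit/1.0/event",
        "eventType": "activity",
        "id": str(uuid.uuid4()),
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "action": action,        # e.g. "read", "create", "authenticate"
        "outcome": outcome,      # "success", "failure", or "pending"
        "initiator": {"id": initiator_id, "typeURI": "data/security/account/user"},
        "target": {"id": target_id, "typeURI": "storage/object"},
        "observer": {"id": observer_id, "typeURI": "service/security"},
    }

# Record a user reading an object, as witnessed by an audit service.
event = cadf_event("read", "success", "user-042", "bucket/report.csv", "audit-svc")
```

Because every provider emits the same shape, a consumer can correlate who did what, to which resource, with what result, across clouds.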


The CISO risk calculus: Navigating the thin line between paranoia and vigilance

Sometimes we forget the critical survival role that paranoia and anxiety have served in the collective survival of our species. Our early ancestors lived in environments filled with predators and other unknown threats. A healthy dose of paranoia enabled them to be more vigilant, helping them detect and avoid potential dangers. The challenge in our modern era is being able to distinguish genuine threats from the endless noise of false alarms, ensuring that our inherited paranoia and anxiety serve us, rather than hinder us. It also requires that we acknowledge and address the human element in the security calculus. ... Security training shouldn’t be a one-off initiative. While establishing robust policies is a crucial first step, it’s unrealistic to expect that people will automatically understand and consistently adhere to them. Human nature is not inherently programmed to retain and act on information presented only once. It’s not merely about providing information; it’s about continuously reinforcing that knowledge through repeated training. 



Quote for the day:

“The first step toward success is taken when you refuse to be a captive of the environment in which you first find yourself.” -- Mark Caine

Daily Tech Digest - December 16, 2023

AI: A Catalyst for Gender Equality in the Workplace

The Equality and Human Rights Commission reports that 77% of mothers have encountered negative or possibly discriminatory experiences during pregnancy, maternity leave, or upon returning to work. The joy of impending motherhood is often tainted by biases, as expecting mothers face subtle exclusions from projects or career advancements. Maternity leave, intended as a sacred period for bonding, becomes tinged with anxiety as women grapple with the fear of being sidelined professionally and the pressure to resume duties prematurely. Returning to the workplace brings feelings of inadequacy and frustration, met with insufficient support for balancing work and family responsibilities. These experiences, rife with frustration and disappointment, mark a daunting struggle for women seeking to re-establish themselves professionally post-maternity leave. However, despite these challenges, women actively choose to re-enter the workforce, embarking on the second phase of their careers post-sabbatical. Addressing these issues requires normative frameworks that ethically tackle the consequences of AI usage.


How to Identify and Address the Challenges of Excessive Business Growth

In other words, when processes start breaking down, and you find yourself constantly in reactive, catch-up mode, it's a sign you need more capacity. The tipping point will vary for each company, but if productivity and quality take a nosedive, growth has become excessive for your present resources. Other red flags include: Customer complaints spike; Employees seem stressed, burned out; You're always scrambling to meet deadlines; Infrastructure creaks under the weight - think cyberattacks, IT failures, supply chain issues; No time for strategy, only tackling emergencies; Costs rising faster than revenue; Profitability declines. Essentially, if growth starts hurting rather than helping, it's time for a change. ... Trying to manage a 100-person company like a 10-person startup will lead to chaos. But running a 10-person shop like a rigid 100-person bureaucracy will cause frustration. Align your leadership style, organizational structure, systems, and talent to your current size and growth needs.


AI Pushes Universities to Modernize IT Infrastructure

The convenience and accessibility of those technologies have created new demands for higher-quality and customizable learning experiences in higher education. According to data from McKinsey, 60% of students report that classroom learning technologies such as generative AI, machine learning and supercomputing have improved their learning and grades since COVID-19 began. In addition to using AI in classrooms, institutions can implement AI solutions in their IT decision-making to create a reliable, secure data infrastructure. As AI becomes more mainstream in higher education operations, universities can better understand, invest in, and apply AI-specific solutions to their IT needs. While investing in AI and the technology to support it, universities can improve operations, offering faster innovation and better student, faculty and researcher experiences. ... With demand for advanced technological offerings at universities becoming commonplace, IT teams face new challenges under small budgets. Many require modern IT infrastructure to support the increasingly large datasets required for groundbreaking insights from research teams.


Future-proofing the digital rupee

Several factors contributed to the inception of India's CBDC. The global competition for CBDC development, coupled with the enthusiasm among nations to embrace digital solutions, played a pivotal role. The introduction of India's CBDC, the digital rupee, might have been influenced, at least partially, by the rising prevalence of cryptocurrencies, especially stablecoins. The Deputy Governor of the Reserve Bank of India (RBI) emphasised the need for caution in permitting such instruments. While stablecoins offer certain advantages, their applicability is confined to a limited number of developed countries. The success of UPI in India has raised questions about the necessity of deploying CBDCs in the country, perhaps making it look like an inconspicuous addition to an already largely developed payments landscape. The RBI Deputy Governor cited the ascent of cryptocurrencies and concerns about policy sovereignty as one of the reasons for considering CBDCs, along with improving digital transactions. However, India presents a unique case with the well-established UPI system already in place.


How to lock down backup infrastructure

The first thing to do is to protect the privileged accounts in your backup system. First, separate these accounts from any centralized login system you use, such as Active Directory, because these systems are sometimes compromised. Create as much of a firewall between that production system and the backup system as possible. And, of course, use a safe password, and do not use any passwords for these accounts that are used anywhere else. (Personally I would use a password manager to support having a different password everywhere.) Finally, make sure that any such logins are protected by multi-factor authentication, and use the best option available. Avoid the use of email or SMS-based MFA, as it is easily foiled by an experienced hacker. Try to use an OTP-based system of some kind, such as Google Authenticator, Symantec VIP, or Yubikey. Also investigate whether your backup system has enhanced authentication for dangerous actions, such as deletion of backups before their scheduled expiration, or restoration of any data to anywhere other than where it was originally created. The first action can be used to quietly delete backups from your backup system without setting off any alarms, and the second to exfiltrate data by restoring it to a system the hacker controls.
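
The advice to prefer OTP-based MFA over SMS rests on a simple, open algorithm. A minimal sketch of HOTP/TOTP (RFC 4226 / RFC 6238), the scheme behind Google Authenticator and similar apps:

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password: HMAC-SHA1 of a counter,
    dynamically truncated to a short decimal code."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based variant: the counter is the current
    30-second window, so codes expire automatically."""
    return hotp(key, int(time.time()) // interval, digits)

# RFC 4226 test vector: ASCII secret "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))  # → "755224"
```

Unlike an SMS code in transit, the shared secret never leaves the authenticator device and the server, which is why SIM-swap and interception attacks don't apply.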


Fortifying cyber defenses: A proactive approach to ransomware resilience

Instead of investing time in formulating non-binding pledges rather than working on actionable solutions, the US Government should adopt a more proactive stance by directly procuring advanced cybersecurity tools. These tools, which have been developed to keep data safe and stop ransomware attacks, exist and are continually evolving. By spearheading the implementation, through investment and education, the government can set a powerful example for the private sector to follow, thereby reinforcing the nation’s cyber infrastructure. The effectiveness of such tools is not hypothetical: they have been tested and proven in various cybersecurity battlegrounds. They range from advanced threat detection systems that use artificial intelligence to identify potential threats before they strike, to automated response solutions that can protect data on infected systems and networks, preventing the lateral spread of ransomware. Investing in these tools would not only enhance the government’s defensive capabilities but would also stimulate the cybersecurity industry, encouraging innovation and development of even more effective defenses.


Cloud squatting: How attackers can use deleted cloud assets against you

The risk from cloud squatting issues can even be inherited from third-party software components. In June, researchers from Checkmarx warned that attackers are scanning npm packages for references to S3 buckets. If they find a bucket that no longer exists, they register it. In many cases the developers of those packages chose to use an S3 bucket to store pre-compiled binary files that are downloaded and executed during the package’s installation. So, if attackers re-register the abandoned buckets, they can perform remote code execution on the systems of the users trusting the affected npm package because they can host their own malicious binaries. ... The attack surface is very large, but organizations need to start somewhere and the sooner the better. The IP reuse and DNS scenario seems to be the most widespread and can be mitigated in several ways: by using reserved IP addresses from a cloud provider which means they won’t be released back into the shared pool until the organization explicitly releases them, by transferring their own IP addresses to the cloud, by using private (internal) IP addresses between services when users don’t need to directly access those servers, or by using IPv6 addresses if offered by the cloud provider because their number is so large that they’re unlikely to ever be reused.
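
The Checkmarx-style scan described above starts with finding bucket references in package source. A rough sketch of that first step (the regexes cover the common S3 URL styles; actually auditing whether each bucket still exists and who owns it, e.g. via an HTTP HEAD request, is left out here):

```python
import re

# Common ways an S3 bucket appears in source: virtual-hosted URLs,
# path-style URLs, and s3:// URIs. Bucket names are 3-63 characters.
S3_PATTERNS = [
    re.compile(r"https?://([a-z0-9][a-z0-9.-]{1,61}[a-z0-9])\.s3(?:[.-][a-z0-9-]+)?\.amazonaws\.com"),
    re.compile(r"https?://s3(?:[.-][a-z0-9-]+)?\.amazonaws\.com/([a-z0-9][a-z0-9.-]{1,61}[a-z0-9])"),
    re.compile(r"s3://([a-z0-9][a-z0-9.-]{1,61}[a-z0-9])"),
]

def find_bucket_refs(text: str) -> set[str]:
    """Return every S3 bucket name referenced in a blob of source text."""
    found: set[str] = set()
    for pattern in S3_PATTERNS:
        found.update(pattern.findall(text))
    return found

refs = find_bucket_refs(
    "curl https://release-cache.s3.eu-west-1.amazonaws.com/v2/tool.bin"
)
# Any name returned here should be checked against your inventory:
# a referenced bucket that nobody in the organization owns anymore
# is exactly the kind of asset an attacker can re-register.
```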


Data Leaders Say ‘AI Paralysis’ Stifling Adoption: Study

While AI is not new in the data industry, the public’s fascination with generative AI has fueled a veritable gold rush for industries to adopt the emerging technologies for a competitive advantage. But the lack of safety guidelines and organizational framework and training may be suffocating AI adoption efforts, according to the report. ... “What happened is everybody got ahold of the GenAI hammer, and now everything looks like a nail,” she says, adding that CIOs and CDOs must do their best to articulate the technical needs to non-technical members of the C-suite. “I do think there’s a disconnect between the CIO and CDO and the chief executive. We should not, in the data and technology space, expect people to understand the layer of complexity that we have to deal with. What we should be doing is taking that complexity and creating a story and a narrative, so it makes sense to the other people in our organization and businesses we work with.” The report also showed that data governance has stalled just as AI is being adopted across industries.


Artificial Intelligence Governance & Alignment with Enterprise Governance

The objectives of AI governance are to: ensure the enterprise adopts pre-trained foundation models in a compliant way; guide the decision-making process to maintain coherence across AI solutions; and maintain the relevancy of the enterprise to meet changing requirements. ... The AI governance framework helps the enterprise manage, govern, monitor, and adopt AI activities, practices, and systems across the organization, and defines a set of metrics that can be used to measure the success of the framework's implementation. ... Establish an executive team for identifying and overseeing AI initiatives across the enterprise. Define a clear vision and strategy for AI implementation aligned with enterprise goals and business functions. Develop practical communications to, and appropriate access for, employees. Set up AI governance across the enterprise. Define the roles and responsibilities of individuals involved in AI development, deployment, and monitoring. Foster collaboration between AI experts, domain experts, and business stakeholders. Establish a centralized, cross-functional team to review and update AI governance practices as technology, regulations, and enterprise needs evolve.


Role of digital in risk management and compliances

Embracing risks is crucial for survival, as risks are inherent in every aspect of business, whether financial or non-financial. As Mark Zuckerberg says, “The only strategy that is guaranteed to fail is not taking risks.” However, this leads to a fundamental question: should businesses pursue risks solely in pursuit of higher returns? Going beyond the pursuit of returns alone, businesses in today’s context should focus on Return of capital and not just Return on capital. Business is about taking calculated risks and managing risks to achieve business goals. Risk exposures must be strategically crafted, with a comprehensive risk management framework in place. We piloted technology-enabled compliance way back in 2015, starting with an India-centric compliance tool that has now been implemented across the global organisation. The tool aids informed decision-making and swift response to emerging risks. The digital solution facilitates seamless communication and collaboration between dispersed teams, ensuring a coordinated approach to risk management. 



Quote for the day:

"Your job gives you authority. Your behavior gives you respect." -- Irwin Federman

Daily Tech Digest - December 14, 2023

Moral Machines: The Importance of Ethics in Generative AI

A transparent model can provide better functionality than an opaque model, as it provides users with explanations for its outputs. An opaque model does not need to explain its reasoning process, which introduces risk and potential liability if unexpected or inaccurate results are provided by a generative AI tool. This lack of visibility also makes opaque models more difficult to test than their transparent counterparts. As such, it’s important to consider generative AI tools with high transparency when working to build ethical systems. Explainability of AI models is another important aspect of creating ethical systems yet challenging to control. AI models, specifically deep learning models, use thousands upon thousands of parameters when creating an output. This type of process can be nearly impossible to track from beginning to end, which limits user visibility. Lack of explainability has already been demonstrated in real-world problems; we’ve seen many examples of AI hallucinations, such as the Bard chatbot error in February 2023, which occurs when a model provides an output that is entirely false or implausible.


12 Software Architecture Pitfalls and How to Avoid Them

Reusing an existing architecture is seldom successful unless the QARs for the new architecture match the ones that an existing architecture was designed to meet. Past performance is no guarantee of future success! Reusing part of an existing application to implement an MVP rapidly may constrain its associated MVA by including legacy technologies in its design. Extending existing components in order to reuse them may complicate their design and make their maintenance more difficult and therefore more expensive. ... Architectural work is problem-solving, with the additional skill of being able to make trade-offs informed by experience in solving particular kinds of problems. Developers who have not had experience solving architectural problems will learn, but they will make a lot of mistakes before they do. ... While new technologies offer interesting capabilities, they always come with trade-offs and unintended side effects. The new technologies don’t fundamentally or magically make meeting QARs unimportant or trivial; in many cases the ability of new technologies to meet QARs is completely unknown.


CIOs weigh the new economics and risks of cloud lock-in

“It is true that hyperscale cloud providers have hit such a critical mass that they create their own gravitational pull,” he says. “Once you adopt their cloud platforms, it can be difficult and expensive to migrate out. [But] CIOs today have more choice in cloud providers than ever. It is no longer a decision between AWS and Azure. Google has been successfully executing a strategy to attract more enterprise customers. Even Oracle has made the transition from focusing on in-house technology to become a full-service cloud provider.” CIOs may consider other approaches, McCarthy adds, such as selecting a single-tenant cloud solution offered by HPE or Dell, which bundle hardware and software in an as-a-service business model that gives CIOs more cloud options. “Another alternative includes colocation companies like Equinix, which has been offering bare-metal IaaS for several years and has now created a partnership with VMware to extend those services higher up the stack,” he says, adding that CIOs should not view a cloud provider “as a location but rather as an operating model that can be deployed in service provider data centers, on-premise, or at the edge.”


Understanding the True Cost of a Data Breach in 2023

Data breaches are common in the modern world, which means even if your organization hasn’t suffered one, the chances of it happening aren’t negligible. Criminal groups stand to profit significantly from these actions, so they are innovative and invest time and money to conduct highly advanced attacks. This means that a data breach doesn’t simply appear one second and then disappear the next. An IBM report noted the average breach cycle lasts for 287 days, with businesses taking 212 days to detect it and an additional 75 to neutralize the threat. Every organization should implement preventative measures to combat threat actors. This means building and exercising safe practices, like storing information securely, adhering to clear policies and training staff to understand data protection. Ultimately, the longer a breach continues, the more expensive it becomes. The Cost of a Data Breach Report 2023 found that companies that contain a breach within 30 days save over $1 million in contrast to those that take longer, so it pays to have a strong recovery process in place.


Fortifying confidential computing in Microsoft Azure

Adding GPU support to confidential VMs is a big change, as it expands the available compute capabilities. Microsoft’s implementation is based on Nvidia H100 GPUs, which are commonly used to train, tune, and run various AI models including computer vision and language processing. The confidential VMs allow you to use private information as a training set, for example training a product evaluation model on prototype components before a public unveiling, or working with medical data, training a diagnostic tool on X-ray or other medical imagery. Instead of embedding a GPU in a VM, and then encrypting the whole VM, Azure keeps the encrypted GPU separate from your confidential computing instance, using encrypted messaging to link the two. Both operate in their own trusted execution environments (TEE), ensuring that your data remains secure. Conceptually this is no different from using an external GPU over Thunderbolt or another PCI bus. Microsoft can allocate GPU resources as needed, with the GPU TEE ensuring that its dedicated memory and configuration are secured.


From reactive to proactive: Always-ready CFD data center analysis

By synchronizing with these toolsets, digital twin models can pull all relevant, necessary data and update accordingly. The data includes objects on the floor plan, assets in the racks, power chain connections, historical power, and environmental readings, and perforated tile and return grate locations. Therefore, the digital twin model is always ready to run the next predictive scenario with current data and minimal supervision from the operational team. As part of the routine output from the software, DataCenter Digital Twin produces Excel-ready reports, capacity dashboards, CFD reports, and go/no-go planning analysis. Teams can then use this information to evaluate future capacity plans, conduct sensitivity studies (such as redundant failure or transient power failure), and run energy optimization studies as needed. Much of this functionality is available through an intuitive and accessible web portal. We know that every organization has a unique set of problems, priorities, and workflows. As such, we’ve split DataCenter Insight Platform into two offerings – DataCenter Asset Twin and DataCenter Digital Twin.


AI-Powered Encryption: A New Era in Cybersecurity

AI-powered encryption represents a groundbreaking advancement in cybersecurity, leveraging the capabilities of artificial intelligence to strengthen data protection. At its core, AI-powered encryption utilizes machine learning algorithms to continuously analyze and adapt to new cyber threats, making it an incredibly dynamic and proactive defense mechanism. By employing AI-driven pattern recognition and predictive analytics, this encryption method can rapidly identify potential vulnerabilities and create tailored encryption protocols to thwart would-be attackers. One key aspect of AI-powered encryption is its ability to autonomously adjust security parameters in real-time based on evolving risk factors. This adaptability ensures that data remains secure even as cyber threats become more sophisticated. Moreover, the integration of AI enables encryption systems to swiftly detect anomalies or suspicious activities within the network, providing an extra layer of defense against unauthorized access or data breaches. 


7 Best Practices for Developers Getting Started with GenAI

Experiment (and encourage your team to experiment) with GenAI tools and code-gen solutions, such as GitHub Copilot, which integrates with every popular IDE and acts as a pair programmer. Copilot offers programmers suggestions, helps troubleshoot code and generates entire functions, making it faster and easier to learn and adapt to GenAI. A word of warning when you first use these off-the-shelf tools: Be wary of using proprietary or sensitive company data, even when just feeding the tool a prompt. GenAI vendors may store your data and use it in future training runs, which is likely to violate your company’s data policy and information-security protocols. ... One of the first steps to deploying GenAI well is to master writing prompts, which is both an art and a science. While prompt engineer is an actual job description, it’s also a good moniker for anyone looking to improve their use of AI. A good prompt engineer knows how to develop, refine and optimize text prompts to get the best results and improve the overall AI system performance. Prompt engineering doesn’t require a particular degree or background, but those doing it need to be skilled at explaining things well. 
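One way to make the develop-refine-optimize loop systematic is to treat a prompt as structured sections you can tweak independently rather than one opaque string. The section names below reflect common prompt-engineering practice, not any vendor's API; this is a toy sketch.

```python
def build_prompt(role, task, constraints=(), examples=()):
    """Assemble a structured prompt; refine it by editing one section at a time."""
    parts = [f"You are {role}.", f"Task: {task}"]
    if constraints:
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    if examples:
        parts.append("Examples:\n" + "\n".join(examples))
    return "\n\n".join(parts)
```

For instance, `build_prompt("a senior Python reviewer", "review the diff below for bugs", constraints=["be concise", "cite line numbers"])` yields a prompt whose role, task, and constraints can each be iterated on separately while the others stay fixed.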


Could Your Organization Benefit from Hyperautomation?

Building a sophisticated hyperautomation ecosystem requires a significant technology investment, Manders says. “Additionally, the integration of multiple technologies and tools, inherent in hyperautomation, can usher in increased complexity, making ecosystem maintenance a challenging endeavor.” Failing to establish clear goals and governance guidelines can also create serious challenges. Automation without governance could lead individual departments to create their own automation processes, which may conflict with other departments’ processes. The resulting hyperautomation silos could lead to some departments failing to take advantage of solutions fellow departments have already deployed. Additionally, every time an organization transports data to another process or platform, there’s the risk of data leaks. “If we don’t follow best practices and ensure that data is secure, this information could fall into the wrong hands,” Rahn warns. Hyperautomation may also lead adopters to dependency on a particular vendor’s ecosystem of tools and technologies. 


How insurtech is using artificial intelligence

As insurers look to become more customer centric, the coupling of AI with advanced analytics can help provide a more specific, personalised and real-time picture of insurance customers. With insurance customers coming to rely on online platforms for purchasing and managing their policies for such a particular commodity, interactions with the firms themselves are few and far between, which can water down the user experience. However, experience orchestration — the leveraging of customer data and AI by insurance companies to create highly personalised interactions — can be implemented to improve relations long-term. Manan Sagar, global head of insurance at Genesys, explains ... This approach not only improves the customer experience but also enhances employee efficiency by automating tasks or routing calls more effectively. “As the insurance industry navigates the digital age, experience orchestration can serve as a powerful tool to uphold the tradition of trust and personal relationships that have long defined the industry. Through this, firms can differentiate themselves in an increasingly commoditised market and ensure their customers remain loyal and satisfied.”



Quote for the day:

"A true leader always keeps an element of surprise up his sleeve which others cannot grasp but which keeps his public excited and breathless." -- Charles de Gaulle

Daily Tech Digest - December 13, 2023

The tide comes in for subsea cable networks

Our subsea networks are a victim of the problem, but they are also a contributor - as is every industrialized sector. Nicole Starosielski, author of The Undersea Network and subsea cable lead principal investigator for Sustainable Subsea Networks, openly criticized the less sustainable aspects of subsea cables, while acknowledging the difficulty that Sustainable Subsea Networks has had in actually quantifying the sector. “It’s a difficult process, generating a carbon footprint of the [subsea cable networks] industry. Unlike a data center which has four walls where you can draw your boundary, the cable industry is comprised of so many elements - from the landing station to cable annexation,” said Starosielski. “There are all these other pieces that the industry is trying to account for. One is obviously a marine aspect. You have a fleet of ships that are older, and there's not a lot of overhead and margin in the supply side of the marine sector. Google has money to build cables, but you don't see SubCom, ASN, and NEC running around with a lot of extra cash to build new ships.”


Five Things for Risk Professionals to Put on Their 2024 To-Do List

Organizations face a critical question: how can they stay ahead of unforeseen challenges? This requires understanding and adapting to emerging risks—like those new, evolving threats that arise from disruptive technology and changing regulatory landscapes. So, let’s consider this scenario: a technology firm faces a sudden regulatory change, impacting its operations. ... This is where organizational resilience becomes pivotal, transforming challenges into opportunities. But how do risk professionals identify emerging risks, particularly those associated with disruptive technologies? The answer lies in fostering a mindset that emphasizes continuous learning and constant monitoring of risks. This approach is complemented by innovative methods such as agile risk assessments and scenario analysis. Moreover, ISACA plays an instrumental role by providing access to a global network of expertise, supporting the risk community with dialogue about technology-focused risk analysis, digital literacy and understanding of the ethical and regulatory aspects of new technologies.


How C-Level Executives Can Increase Cyber Resilience

First things first: To secure your organization’s C-suite, start by putting basic security measures in place. All executive accounts should be secured using multifactor authentication (MFA). Avoid relying on SMS, as it can be compromised more easily than other options. Second, a thorough audit is crucial to determine what access privileges the CEO and other executive officers currently have. Given the unpredictable demands on their time, senior executives might have been granted access to key systems outside of predefined time windows. However, this added flexibility comes with risks. Any access that senior executives have to new products or proprietary information should be on a temporary basis to eliminate the potential for long-term vulnerabilities. It is also vital to implement robust monitoring, logging and alerting to oversee such access and ensure it is used legitimately. Third, the least privilege approach should also apply to senior executives. For example, C-level executives are more concerned about overall sales trends than the details around each deal, so there is generally no need for them to have write or modify permissions for the CRM or other critical databases.
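The time-boxed access this passage recommends can be sketched as a small grant object that expires automatically and logs every check for audit. The class and field names here are hypothetical, not from any identity product; real deployments would use a privileged-access-management system rather than hand-rolled code.

```python
from datetime import datetime, timedelta, timezone

class TemporaryGrant:
    """A time-limited access grant: temporary by default, auditable by design."""

    def __init__(self, user, resource, hours=4):
        self.user = user
        self.resource = resource
        self.expires = datetime.now(timezone.utc) + timedelta(hours=hours)
        self.audit_log = []  # (timestamp, user, resource, allowed) tuples

    def is_allowed(self, now=None):
        """Check the grant and record the attempt, whether allowed or denied."""
        now = now or datetime.now(timezone.utc)
        allowed = now < self.expires
        self.audit_log.append((now.isoformat(), self.user, self.resource, allowed))
        return allowed
```

Because denial is the default once the window closes, there is no long-lived access to forget about and revoke later, and the log supports the monitoring and alerting the paragraph calls for.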


The intersection of telehealth and AI: How can they reinforce each other?

AI tools help streamline the triage process, making it more user-friendly as well. It begins by collecting basic information like demographics and risk factors, followed by inquiries about the patient's primary symptoms, covering a wide age range from newborns to adults. ... Currently, AI tools are not authorized to diagnose patients. Despite the remarkable progress in generative AI, we must remain cautious about their practical application in healthcare. Our blood pressure cuffs are certified medical devices, and it's noteworthy that while AI tools possess significant capabilities, they are not subject to the same regulatory rigor. It's critical to establish a robust regulatory framework to guide and set standards for AI-assisted diagnosis in the future. This includes addressing key challenges like ensuring maximum transparency in AI decision-making processes and tackling issues related to bias and inaccuracies. I believe the ideal path forward is to position AI tools as optimal supporters for both patients and healthcare providers.


How to Effectively Draft Data Processing Agreements to Protect Information Shared with Service Providers

Privacy laws may impose notice requirements, remediation obligations and penalties on data controllers for privacy breaches. Thus, establishing clear sets of obligations for data processors in the case of a security breach can allow data controllers to meet their own legal obligations. Data controllers should expand the DPA provisions for security breach obligations to include any security incident or misuse of the data by the data processor or its personnel. This obligation should include both real and suspected incidents as this allows for mitigation efforts to be deployed early on by the data controller rather than waiting for a confirmation of a security incident, which can take several weeks depending on the complexity of the required forensic investigation. Data controllers should include security control provisions in the DPA setting out the steps to be taken by a data processor to secure sensitive data and respond to data incidents. Depending on the nature and sensitivity of the data, data controllers may lay out more specific steps to be taken before or after a security incident. 


EU’s AI Act: Europe’s New Rules for Artificial Intelligence

Developers of AI systems deemed to be high risk will have to meet certain obligations set by European lawmakers, including mandatory assessment of how their AI systems might impact the fundamental rights of citizens. This applies to the insurance and banking sectors, as well as any AI systems with “significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law.” AI models that are considered high-impact and pose a systemic risk – meaning they could cause widespread problems if things go wrong – must follow more stringent rules. Developers of these systems will be required to perform evaluations of their models, as well as “assess and mitigate systemic risks, conduct adversarial testing, report to the (European) Commission on serious incidents, ensure cybersecurity and report on their energy efficiency.” Additionally, European citizens will have a right to launch complaints and receive explanations about decisions made by high-risk AI systems that impact their rights. To support European startups in creating their own AI models, the AI Act also promotes regulatory sandboxes and real-world testing. 


SEI platforms empower managers to gain real-time insights into their team’s progress, eliminating unnecessary meetings and constant check-ins. By breaking down silos and providing a clear view of everyone’s workload, SEI platforms foster greater team autonomy, allowing them to receive assistance when needed so they can operate more efficiently. ... Even in highly efficient and high-performing organizations, some projects may experience delays or budget overruns, and it can be hard to understand and communicate why. SEI platforms can help leaders spot recurring bottlenecks or inefficiencies and work with their teams to improve the relevant processes. They also make it possible to test the efficacy of process changes. ... Specific metrics provided by SEI platforms allow engineering leaders to assess the quality of their team’s work, evaluate code review practices, and maintain stability and efficiency in software delivery. Visualizations of trends, patterns, and correlations from these metrics offer valuable insights to engineering leaders, leading to informed decision-making.


Shifting data protection regulations show why businesses must put privacy at their core

It isn’t just legislators pressuring businesses to take their data privacy responsibilities seriously. Public awareness of how data is collected, utilized and shared is on the rise, affecting consumer behavior accordingly. Publicity around the EU General Data Protection Regulation (GDPR) played a very important role in educating consumers in the UK about data privacy, with 79% of UK consumers saying that transparency about how their data is collected and used is important to them. But they also recognize the value of their data, with 61% of UK consumers viewing their personal information as an asset that can be used to negotiate better prices and offers with companies. And there is growing evidence that US consumers are increasingly privacy aware. According to DataGrail’s Privacy Trends 2023 report, DSRs – privacy requests submitted by data subjects to access or modify the data a company holds on them – grew by 72% year-on-year between 2021 and 2022. Of these requests, 52% came from people living in states without enacted privacy laws.


Hiring sentiment seems positive for Q4 after witnessing sluggishness in Q3

Consumer and retail companies will see a resurgence in Q4 after muted demand in semi-urban and rural areas during the Q3 festive season. While the report carries positive sentiment for the financial services sector, we expect cautious moves from banks, NBFCs and Fintechs given increased regulatory pressure from the RBI on lending norms for riskier credits. According to the report findings, H2 is projecting positive incremental hiring, including workforce expansion, new hiring, and replacement hiring. This surge in workforce expansion can be attributed to government policies and initiatives aimed at fortifying employment opportunities and cultivating a business-friendly environment. Notably, India experienced a remarkable 7.8% surge in GDP during the first quarter of the fiscal year 2023-24 (Q1 FY24). This robust GDP growth underscores a potent economic rebound, driving the acceleration in incremental hiring across the nation. The report dives into the multifaceted factors that influence employment in India. According to the data, economic conditions significantly impact the employment environment, as cited by 69% of respondents.


Is the UK-US data bridge doomed to fail?

While experts agree that improvements have been made compared to previous efforts, concerns about the legislation remain. The Open Rights Group has argued that the data bridge will “betray UK democratic values, and position the UK as a data-laundering haven pushing for a global privacy race to the bottom”. “This approach doesn’t only fail to provide a long-term, pragmatic solution to international data transfers, but would further the UK’s reputation as an ‘international rogue actor’ that recent UK Governments have advanced throughout the years,” writes Mariano delli Santi, a data protection expert at ORG. The ICO has also been quick to highlight specific areas that could pose risks to data subjects in the UK. The watchdog has raised concerns about certain terminology used and also recommends monitoring the implementation of the UK-US data bridge generally, to ensure it operates as intended. For example, the ICO points out that the UK-US data bridge does not name all the special category data defined in Article 9 of UK GDPR, such as biometric, genetic, criminal offense, or sexual orientation data.



Quote for the day:

"Leadership occurs any time you attempt to influence the thinking, development of beliefs of somebody else." -- Dr. Ken Blanchard

Daily Tech Digest - December 12, 2023

The SEC action against SolarWinds highlights how tough it can get for CISOs

The SEC has accused Brown of misleading investors by not disclosing "known risks" and not accurately representing the company’s cybersecurity measures during and before the 2020 Sunburst cyberattack that affected thousands of customers in government agencies and companies globally. ... The claimed failures include not abiding by statements the company made on its website about its software development practices and its internal password policies. The SEC complains in its filings that the company did not disclose cybersecurity risks separately from other risks, as SolarWinds' role and industry warranted, nor give targeted attacks and the disclosure obligations surrounding them the extra attention they demanded. ... The SEC also indicated that SolarWinds did not limit administrative access to those who needed it. Too often in developer networks, administrative rights are granted too widely and left unrestricted. Internal staff had expressed concerns that such access would lead to losses of organizational assets and personal data.


Deriving Actionable Insights and ROI from AI

To increase the ROI of AI, large language models (LLMs) must ingest clean and high-quality data for accurate, meaningful insights. This is only possible by investing in data discovery and data classification solutions and processes. Organizations will also face growing AI-related security challenges in 2024. This will lead them to set up guardrails that protect corporate and customer data. Businesses must also consider that company-specific or proprietary data ingested by LLMs could put organizations at risk if company financials or other private information are replicated to a public AI engine and exposed. ... There are many opportunities for businesses to benefit from AI; however, there also needs to be a rapid evolution of data classification and data life cycle management before businesses will be able to derive the value they expect from AI. This is especially important if companies are trying to justify ROI from their AI investment.


Startup Oxide Computer puts the ‘cloud’ back in on-prem private clouds

Oxide's main mission is to put the "cloud" back in private cloud computing. The company is built on the premise that you should be able to choose to rent or own cloud capacity, depending on the workload, not losing the benefits of cloud computing like elasticity when you choose the latter. To accomplish this, the Oxide team set out to build an entirely new cloud hardware rack that would deliver all of the advantages public cloud vendors enjoy, without sacrificing on control, efficiency, and flexibility. Another issue that limits private clouds is that many enterprises attempt to build their private clouds on Kubernetes. The problem is that Kubernetes was not designed for multitenancy, and, thus, it does not offer a true cloud experience. That's not a knock on Kubernetes, but the container orchestration software is typically deployed on top of bloated layers of software, adding complexity and making it difficult to manage at scale. ... According to Oxide, this design allows enterprises to be fully deployed within a few hours of unboxing the system, versus what typically takes weeks or months using the "kit car" build of OEM hardware.

We’re in a truly important phase of change due to the impact of artificial intelligence. In fact, I believe people have been quite amazed at how good it is. Even industry professionals have been quite surprised at how powerful it is. But it comes with dangers, and I think that’s the important point I talked about a few years ago and still find very important today; you really need to understand how this works to use it effectively, because you still have to understand that it works statistically, in the sense that it understands what the most probable words are to follow the paragraphs it has already seen. ... I think we will have to ask the question of whether we are developing a new species, whether this is an evolution of what we are doing, or whether we are going to have to consider a new hybrid species, which is probably the perspective of integrating artificial intelligence into our own species. Elon Musk is considering the idea with Neuralink. His response to the existential threat of artificial intelligence is that no, we must become it, we must integrate artificial intelligence and humans, which will generate new philosophical, social, and legal dilemmas in the future.


Quantum-Computing Approach Uses Single Molecules As Qubits For First Time

Some of the first demonstrations of the basic principles of quantum computing, in the late 1990s, used large numbers of molecules manipulated in a solution inside a nuclear magnetic resonance machine. Since then, researchers have developed a variety of other platforms for quantum computing, including superconducting circuits and individual ions held in a vacuum. Each of these objects is used as the fundamental unit of quantum information, or qubit — the quantum equivalent of the bits in classical computers. In the past few years, another strong contender has emerged, in which the qubits are made of neutral atoms — as opposed to ions — trapped with highly focused laser-beam ‘tweezers’. Now, two separate teams have made early progress towards using this approach with molecules instead of atoms. “Molecules have a bit more complexity, which means they offer new ways to both encode quantum information, and also new ways in which they can interact,” says Lawrence Cheuk.


DevOps and Automation

Continuous Integration (CI) and Continuous Deployment (CD) are critical components of DevOps software development. CI is the practice of automating the integration of code changes from multiple contributors into a single software project. It is typically implemented in such a way that it triggers an automated build with testing, with the goals of quickly detecting and fixing bugs, improving software quality, and reducing release time. After the build stage, CD extends CI by automatically deploying all code changes to a testing and/or production environment. This means that, in addition to automated testing, the release process is also automated, allowing for a more efficient and streamlined path to delivering new features and updates to users. Docker and Kubernetes are frequently used to improve efficiency and consistency in CI/CD workflows. The code is first built into a Docker container, which is then pushed to a registry in the CI stage. During the CD stage, Kubernetes retrieves the Docker container from the registry and deploys it to the appropriate environment, whether testing, staging, or production. 
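The Docker-to-Kubernetes flow just described can be sketched as a minimal pipeline script. The image name, registry, and manifest path below are placeholder assumptions, and a real pipeline would normally live in a CI system's own configuration; this sketch only shows the ordering of the stages.

```python
import subprocess

def pipeline_steps(image="registry.example.com/app", tag="1.0.0",
                   manifest="k8s/deployment.yaml"):
    """Return the commands a minimal CI/CD pipeline would run, in order."""
    ref = f"{image}:{tag}"
    return [
        ["docker", "build", "-t", ref, "."],  # CI: build the container image
        ["docker", "push", ref],              # CI: publish it to the registry
        ["kubectl", "apply", "-f", manifest], # CD: deploy to the target cluster
    ]

def run_pipeline(runner=subprocess.run):
    """Execute each stage, stopping at the first failure (check=True)."""
    for cmd in pipeline_steps():
        runner(cmd, check=True)
```

Injecting `runner` keeps the stage ordering testable without a Docker daemon or cluster; swapping the `kubectl apply` target for a staging versus production manifest is what distinguishes the testing and production deployments the paragraph mentions.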


Unveiling the 'Willingness Pyramid' across generations and cities

The 'Willingness Pyramid' stands as a compelling representation of the shifting attitudes towards work models in bustling metropolises. At its core, it underscores a marked inclination towards the "only office" work model, with a notable emphasis on the younger, tech-savvy workforce in these urban hubs. For the emerging generations of Late millennials and Gen Z, the physical office environment is not merely a place of work but a dynamic space that fosters productivity, innovation, and collaboration. The younger workforce's enthusiastic embrace of the "only office" model is rooted in a confluence of factors. Raised in the digital age, these individuals have grown up with technology seamlessly integrated into their lives. As a result, they perceive the physical office as a hub for face-to-face interactions, spontaneous idea exchanges, and a breeding ground for innovation that cannot be fully replicated in remote settings. The office, for them, is not just a workplace but a social and creative nexus. This trend, however, exhibits a nuanced dynamic as we traverse through different age groups within the same urban landscapes.


Why FinOps Must Focus on Value, Not Just Cost

It’s not necessarily the fault of those teams. Rather, FinOps in its earliest iterations has suffered from some of the same problems that plagued its predecessors — namely, an approach to cost management that focuses on the “what” — how much the bill says you spent, and only after it arrives — versus the “why,” which should accurately reflect the business reasons for consuming cloud resources in the first place, as well as the results those choices produced. Moreover, FinOps, even while the name suggests tight collaboration, often still relies on fragmented and retroactive processes and information, according to Williams. ... Moving to a value-focused approach to FinOps is akin to the “shift left” mindset that is increasingly popular in security and other IT domains that have historically been dealt with via lagging indicators. Some organizations try mandating discipline with regard to cloud usage — the “do this or else” approach. “That never works,” Williams said. Instead, consumption policies and technical guardrails need to be implemented before resources are ever provisioned and deployed.


Researchers Unmask Sandman APT's Hidden Link to China-Based KEYPLUG Backdoor

"We did not observe concrete technical indicators confirming the involvement of a shared vendor or digital quartermaster in the case of LuaDream and KEYPLUG," Aleksandar Milenkoski, senior threat researcher at SentinelLabs, told The Hacker News. "However, given the observed indicators of shared development practices, and overlaps in functionalities and design, we do not exclude that possibility. Noteworthy is the prevalence of similar cases within the Chinese threat landscape, indicating there could be established internal and/or external channels for supplying malware to operational teams." "The order in which LuaDream and KEYPLUG evaluate the configured protocol among HTTP, TCP, WebSocket, and QUIC is the same: HTTP, TCP, WebSocket, and QUIC in that order," the researchers said. "The high-level execution flows of LuaDream and KEYPLUG are very similar." The adoption of Lua is another sign that threat actors, both nation-state aligned and cybercrime-focused, are increasingly setting their sights on uncommon programming languages like DLang and Nim to evade detection and persist in victim environments for extended periods of time.


The skills and traits of elite CTOs

Strategic thinking is essential for a CTO to align technology initiatives with the overall business strategy, Kowsari says. “A successful CTO should be able to set a clear technology vision, identify opportunities for innovation, and make informed decisions that drive the company’s growth,” he says. Without a strategic mindset, technology investments and initiatives might lack direction and fail to contribute to the organization’s success, Kowsari says. The ability to set and execute a vision is indeed a fundamental aspect of the CTO’s role, Clemenson says. “This encompasses aspects such as design, funding, efficient resource allocation, buy vs. build strategy, while simultaneously emphasizing short- and long-term considerations,” he says. At the same time, CTOs must be strong leaders. “CTOs are responsible for leading and managing technology teams,” Kowsari says. “Strong leadership and team-building skills are vital. Effective team management can lead to increased productivity, innovative solutions, and the successful execution of technology projects. It also helps retain top technical talent, which is essential in a competitive job market.”



Quote for the day:

"Don't be buffaloed by experts and elites. Experts often possess more data than judgement." -- Colin Powell