Daily Tech Digest - December 18, 2023

How to Select the Right Industry Cloud for Your Business

One of the biggest mistakes IT leaders make when shopping for an industry cloud is searching for a solution without first constructing a holistic strategy, Campbell says. He recommends focusing on areas that will maximize the overall investment value, including data management and security operations, while ensuring both business and IT buy-in. Due to multiple factors, including compliance, business continuity, customer trust, and financial health, cybersecurity should be a central consideration when assessing industry clouds, says Nigel Gibbons, a director and partner with cyber threat consultancy NCC Group. ... Gibbons adds that it’s also important to be aware of data sovereignty requirements and the impact of laws on where and how data is stored, particularly for businesses operating internationally. To ensure tight alignment with both present and future business goals, it’s important to choose a forward-looking provider, Gibbons says. “It’s essential to future-proof investments by choosing a provider that regularly innovates and updates its offerings.”


What to do when receiving unprompted MFA OTP codes

When receiving an unprompted 2FA code, the account holder should assume their credentials were stolen and log directly into Amazon, without clicking on any links in text messages or emails, to change their password. If that same password is used on any of your other accounts, it should also be changed immediately on those sites. It is also important not to assume that, because 2FA protected your account, you no longer need to change your password. That is a false sense of security: threat actors have figured out ways to bypass MFA in the past, so there is no reason to give them the opportunity to do so with your account. Furthermore, while SMS and email 2FA provide extra protection for your accounts, they are the riskiest MFA methods to use. If someone gains access to your email or phone number, such as through a SIM swapping attack, they will also have access to your OTP codes, allowing them to reset your password without you knowing until it is too late. Instead, if a site supports authentication apps, hardware security keys, or passkeys, you should use one of those options, as they require attackers to have access to your device to pass the multi-factor authentication challenge.


Chilling on the Edge: Navigating the challenges of cooling Edge data centers

“In order to manage the complete value supply chain for critical Edge applications, service support is critical for our end user customers. Our products are designed with full consideration of service access and maintenance processes,” he adds. With the global warming phenomenon, summer ambient temperatures are rising globally, with the UK even seeing thermometers reach 40 degrees Celsius in some parts in recent times. “The result is that design considerations for standard products require a summer ambient operation up to 50 degrees ambient in most markets now. This can be exacerbated when we take into account microclimates, where you have a large population of equipment working together, further increasing the local ambient temperature,” says Ansari. Increasingly, he adds, greenfield sites are also abandoning raised floor designs in favour of maximising the indoor cooling space and creating a larger floor-to-ceiling area. “This seems to have become the norm for Edge and, increasingly, for colo,” he adds. “This is ideal for our latest fan wall cooling range, AireWall ONE™, which is a parametric design suitable for horizontal airflow and configurable to maximise design options.”


EU AI Act agreed: 5 key considerations for businesses for the road ahead

A company may use AI in a variety of ways, with such uses falling into different risk-based categories under the AI Act. Therefore, a ‘one size fits all’ AI governance strategy may not be appropriate. When structuring an AI governance team, businesses should consider including individuals from a range of existing teams to ensure that the requirements of the AI Act can be fully met. For example, although certain requirements will be familiar to privacy teams (e.g. risk and impact assessments), when it comes to AI there is a level of technical knowledge needed relating to testing and monitoring of systems, oversight and transparency requirements. ... The AI Act will not exist in a vacuum and is not the beginning and end for AI governance. It must be read alongside other laws in the regulatory landscape, e.g. the GDPR. The interplay with privacy is clear, given that data is at the heart of AI systems. This inextricable link is demonstrated by, for example, the provisions in the GDPR on automated decision-making. Earlier this month we saw the first judgment in which the CJEU interpreted Article 22 GDPR, deciding what constitutes ‘automated decision-making’.


Unpacking The Rise of AI: Its Potential, Its Disruptions, and What It Entails in the Near Future

The timely and cost-effective results produced by AI have already made a host of businesses replace their human resources with technology, while many others have started contemplating the same. One of the recent examples is the replacement of humans with bots in customer service by businesses mainly to save costs and redirect them towards their core business. AI-driven tools are also better equipped to study customer feedback and aid businesses and business leaders in identifying customer preferences and making informed decisions. Meanwhile, AI has also found its way into the healthcare and finance sectors. In healthcare, AI has improved diagnostics, personalised treatment plans, and drug discovery, fostering more effective and targeted medical interventions. In finance, AI algorithms analyse vast datasets to enhance decision-making, risk management, and fraud detection. Moreover, according to Goldman Sachs, about 300 million people could potentially lose their jobs due to automation and technologies like generative AI. Consequently, there are concerns among professionals and aspiring students about the potential automation within their domains and the resultant loss of work.


Surviving the cyber arms race in the age of generative AI

It's critical that industry and government continually evaluate the guardrails in place to protect the public from unrestrained use of AI, whether by cybercriminals or established organizations. The EO promises to develop standards that will ensure AI systems are safe and tested against a rigorous set of qualifications. These qualifications and standards will require refinement over time to become truly standardized. The US Department of Commerce will also develop guidance for watermarking and content authentication to clearly label AI-generated content, while companies like Alphabet, Meta, and OpenAI have already made commitments to implement such measures. This approach echoes how the US Secret Service got manufacturers of color copiers and printers to include digital watermarks on printed pages after the copiers became advanced enough to counterfeit money. However, watermarking brings its own set of challenges, and bad actors will look for ways to evade or misuse it. To ensure the responsible development and deployment of AI technologies, the evolution of our legislative framework must continue. With transparency, visibility, and understanding as cornerstones, the tech industry and government can work together to mitigate risk and counteract threats.


Building A Lasting Data Management Strategy Requires A Data-First Mindset

Without the data owners' participation, this project won't work. They're the experts in the processes underpinned by the data, whether it’s procurement, marketing, production or another department. They bring a functional view to the project. The migration is just a means to an end. If you don’t do it in the context of the business process, you’re just moving ones and zeros. There’s no value creation. The other side of the coin is the technical people, those who work closely with the line of business owners to execute the migration. These are the IT people who understand the tools, the steps and what needs to happen next. ... As IT and business teams struggle to do more with less, there'll be increased pressure to make the ROI case even before purchasing new tools. Historically, there's been a missing link between tool implementation and recognition at the executive level of the tool’s importance. Data management is a technical challenge for many enterprises, one that's primarily internal. Poor governance and a lack of monitoring are the primary factors cited as the causes of faulty data. As a result, the opportunity resides in a more comprehensive grasp of data and a more potent means of driving change so that data matches up with corporate goals.


9 ways to keep your developer team happy

Good feedback is important in any type of job, and software development is no different. Programmers want to know how they are doing and what they could do to improve. Developers also want to know whether the products they create are beneficial to users and profitable for their companies. An important part of feedback is recognition. This can be informal, such as a team leader paying a compliment for a successful project, or formal, such as a reward or perk for work well done. Public recognition among peers is also important. “Regular recognition and constructive feedback for their contributions are essential for a developer's happiness,” James says. “Feeling appreciated and acknowledged for their hard work and expertise can significantly boost job satisfaction.” ... Developers want to work on projects that push the edge of innovation, such as software that leverages AI and machine learning capabilities. They also want to build products that make a difference. Knowing that their organization stands out in the market is a source of pride and satisfaction. Developers "feel happy when they are allowed to work on innovative solutions,” says Vinika Garg, COO of Webomaze, an SEO agency.


The Three Most Important Emerging AI Trends in Data Analytics

As AI-enabled applications performing analytics are spun up, it is increasingly critical that the training and production data sets are unbiased and incorruptible. Bad training or production data sets that are biased or just out of date can lead the system to make bad recommendations and worse decisions. Ensuring the safety of the data includes a legal process (asking the firm to guarantee that the data in the repository isn’t owned by someone else who might take exception to its use) and some form of indemnification. The use of indemnification isn’t consistent, however, with some of the more mature firms indemnifying their customers and some of the other firms asking for indemnification from their customers. ... AI is very expensive to run in the cloud because it uses substantial processing and storage resources. However, if you can shift the load to the client, it frees up those resources and allows for faster results with some loss of trainability and customization as, typically, the clients use a compressed data set and inferencing that is more limited than the capabilities of a cloud implementation. 
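Shifting inference to the client typically means compressing the model first. One common technique is linear quantization, which maps each 32-bit float weight onto an 8-bit integer at the cost of some precision; the sketch below is illustrative only (the function names are not from any particular library):

```python
def quantize_int8(weights):
    """Symmetric linear quantization: map floats onto the int8 range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard against all-zero weights
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Approximate reconstruction of the original floats."""
    return [q * scale for q in quantized]

weights = [0.5, -2.0, 0.25]
q, scale = quantize_int8(weights)
print(q)                      # roughly 4x smaller to store than float32
print(dequantize(q, scale))   # close to, but not exactly, the originals
```

The round trip is lossy, which is exactly the "some loss of trainability and customization" trade-off described above: the client runs a smaller, cheaper approximation of the cloud model.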


Creating a formula for effective vulnerability prioritization

Systems should operate continuously and collect live data to drive vulnerability prioritization efforts based on actual usage. Traditional vulnerability systems, on the other hand, typically collect information periodically – on demand, weekly, or even monthly. Because that information doesn’t present a current map of the organization’s exposure, this approach carries significant human resource overhead and creates security gaps. Automated and continuous prioritization adapts to a dynamically changing attack surface. In turn, teams gain greater accuracy with less reliance on manual data collection and analysis. Automated systems allow for greater capacity to digest more (and higher-priority) data and to better leverage existing resources. In parallel, organizations should consider deploying patchless protection to reduce their attack surface until patches are deployed. Patchless protection shields known vulnerabilities that haven’t been patched yet while preventing unknown vulnerabilities from causing damage.
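A prioritization formula along these lines weights raw severity by live exposure context. The sketch below is purely illustrative; the factors and weights are assumptions, not taken from any specific product:

```python
def priority_score(cvss, exploited_in_wild, internet_exposed, loaded_at_runtime):
    """Combine static severity (CVSS, 0-10) with live exposure context
    into a 0-1 priority. The weights here are illustrative only."""
    score = cvss / 10.0
    if exploited_in_wild:
        score *= 1.5   # known exploitation trumps theoretical severity
    if internet_exposed:
        score *= 1.3   # reachable attack surface
    if not loaded_at_runtime:
        score *= 0.3   # the vulnerable code never actually executes
    return min(score, 1.0)

# A critical CVE in a library the app never loads can rank below
# a medium CVE that is internet-facing and actively exploited.
print(priority_score(9.8, False, False, False))
print(priority_score(5.5, True, True, True))
```

The point of the exercise: with continuously collected usage data, the runtime and exposure inputs stay current, so the ranking reflects today's attack surface rather than last month's scan.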



Quote for the day:

“If you don’t value your time, neither will others. Stop giving away your time and talents. Value what you know & start charging for it.” -- Kim Garst

Daily Tech Digest - December 17, 2023

Soft Skills Every CISO Needs to Inspire Better Boardroom Relationships

CISOs now need to understand how to communicate with stakeholders and the boards around an incident. The only way to do this is to collaborate not only with chief financial officers (CFOs) to understand what stakeholders want to hear, but also with the legal department to set clear standards with the board on what they define as material. Working together allows the CISO to break down these silos, ensuring close collaboration toward business goals without adding unnecessary cybersecurity risk. If done right, with the appropriate transparency, any additional measures that are needed to combat a new or emerging risk or regulation should be easier to accept. ... CISOs also have to be good storytellers, using data to craft a narrative around how the business is mitigating growing risk. This includes taking a key performance indicator (KPI) — again using language and metrics that the board and other business stakeholders understand — and showcasing whether existing efforts are falling short and, if so, presenting a strategy to improve results. 


AI-Powered Test Case Generation: A Game-Changer for Testers

Unlike traditional methods, AI brings unimagined intelligence to the test case creation process, complementing everything from functional to performance testing services. The process involves active use of machine learning algorithms to analyze patterns and identify critical scenarios. Besides, Natural Language Processing (NLP) enables AI to comprehend and interpret complex requirements, streamlining the translation of specifications into effective test cases. Additionally, predictive modelling anticipates potential system behaviors, contributing to more comprehensive test coverage. Overall, this amalgamation of advanced technologies empowers AI to autonomously generate test cases, significantly reducing manual effort and enhancing the precision of test scenarios. As a result, AI not only accelerates the testing lifecycle but also elevates the overall quality and reliability of software applications. By harnessing the capabilities of artificial intelligence, QA service providers and test teams can take a transformative approach to redefining traditional test case generation practices.


UK AI National Institute Urges 'Red Lines' For Generative AI

The report singled out autonomous agents as a specific application of generative AI that warrants close oversight in a national security context. Autonomous agents build on LLMs by interacting with their environment and taking actions with little human intervention. The technology has the potential to accelerate national security analysis, such as by rapidly processing vast amounts of open-source data, providing preliminary risk assessments and generating hypotheses for human analysts to pursue, the report said. But critics told report authors that the technology falls short of human-level reasoning and can't reproduce the innate understanding of risk that humans use to avoid failure. Among the mitigations the report suggested is recording the actions and decisions taken by autonomous agents: "The agent architecture must not obscure or undermine any potential aspects of explainability originating from the LLM." It also suggests attaching warnings to "every stage" of generative AI output and documenting what an agent-based system would do in a worst-case scenario.
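Recording an agent's actions can be as simple as wrapping each tool or action function so every invocation and its outcome land in an append-only log. A minimal sketch (the decorator and field names are illustrative, not from the report):

```python
import functools
import time

def audited(action_log):
    """Decorator factory: record every call to the wrapped agent action,
    including its outcome, in the given log."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            entry = {"action": fn.__name__,
                     "args": repr((args, kwargs)),
                     "time": time.time()}
            try:
                result = fn(*args, **kwargs)
                entry["outcome"] = "success"
                return result
            except Exception as exc:
                entry["outcome"] = f"failure: {exc}"
                raise
            finally:
                action_log.append(entry)  # logged whether the action succeeds or fails
        return wrapper
    return decorator

log = []

@audited(log)
def search_open_source(query):
    # stand-in for a real agent tool call
    return f"results for {query!r}"

search_open_source("sanctions filings")
print(log[0]["action"], log[0]["outcome"])
```

Because the log is written outside the LLM itself, it preserves a ground-truth record of what the agent actually did, independent of whatever explanation the model offers afterwards.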


From Vision to Value: A DevOps Framework for Sustainable Innovation

The landscape of innovation is fertile ground for emerging technologies, which act as enablers and accelerators in the product development lifecycle. The plethora of tools available today — from sophisticated design software to robust development environments — has dramatically reshaped the process of innovation. Technologies such as cloud computing platforms, low-code development environments, and powerful coding frameworks empower organizations to bring ideas to life with unprecedented speed and efficiency. In the spectrum of tooling options, the decision between low-code platforms and traditional coding environments presents a strategic choice for teams. Low-code platforms can significantly reduce the complexity and time involved in creating applications, democratizing the development process and allowing a broader range of professionals to contribute to innovation. This accessibility can accelerate the prototyping phase, enabling rapid iteration and user feedback integration. Conversely, traditional coding remains indispensable for building highly customized and sophisticated systems. 


From Institutions to AI: The Blockchain Trends Emerging for 2024

Technology moves faster than regulation, and banks and regulators must be able to collaborate more quickly and innovate for the technology to succeed, thrive, and benefit real people, says Anthony Moro, CEO of Provenance Blockchain Foundation, which is responsible for the Provenance Blockchain, a Layer 1 blockchain purpose-built for financial services. “2024 will be a period in which regulators gain more familiarity with innovations being developed on-chain and increase participation in experiments and discussion,” he says. In addition, private, permissioned environments are also poised to help streamline banks’ internal operations, including cross-border payments and settlements, according to Moro. They offer a potential solution for banks and financial institutions to participate in the evolving digital economy while adhering to regulatory requirements and maintaining a level of control over their own products and processes. “Banks and even regulators can use permissioned blockchain zones as ‘sandboxes’ to test out new financial products and services in a controlled and safe environment, which ultimately minimizes risks and stays within the confines of existing regulations,” Moro says.


Ditch Brainstorming: Adam Grant's Brainwriting Revolution

Unlike traditional brainstorming sessions, brainwriting levels the playing field and ensures that all team members, regardless of their inclination towards extroversion or introversion get an equal opportunity to contribute. The process of writing ideas not only allows thoughtful consideration but also prevents the overshadowing of quieter voices. ... Written communication drastically minimises the fear of judgment as compared to voicing it in front of the dominant ideas. This helps in fostering an environment where individuals feel comfortable sharing unconventional or "wild" ideas. This can lead to breakthrough innovations that may have been overlooked in a traditional brainstorming setting. ... Brainwriting allows individuals to think more deeply about their ideas before sharing them with the group. This deliberate reflection can result in not only more refined and fully developed concepts but also more confidence in the idea ultimately improving the overall quality of the generated ideas. ... Unlike the sequential nature of verbal brainstorming, multiple ideas can be generated simultaneously by different team members in the process of brainwriting.


If Computer Science Is Doomed, What Comes Next?

But when it comes to AI replacing human programmers, “I think this is all something that we really have to take seriously…” Welsh said. “I don’t think that this is just — I am exaggerating for effect. But the industry is going to change. So the natural question then is, well, what happens when we cut humans out of the loop? How do we build software? How do we ship product?” Welsh ponders the ramifications of this world. Our current code optimizations like readability and reusability “are only because poor humans have to wrangle with this stuff.” But imagine a world where “It doesn’t really matter if it’s duplicative or repetitive or modular or nicely abstracted.” Welsh put up a diagram of how he envisions the software team of the future… Welsh hedges that he’s “not sure” if all of computer science will one day become a historical artifact — but presents his vision of a “plausible” future, with people “not writing programs in the conventional way that we do today, and instead, having an AI do their bidding.” It happens partly through the use of platforms like Fixie, his company’s platform for easily creating AI-based applications.


4 ways to overcome your biggest worries about generative AI

Avivah Litan, distinguished VP analyst at Gartner, says one of the key issues to be aware of is the pressure for change from people outside the IT department. "The business is wanting to charge full steam ahead," she says, referring to the adoption of generative AI tools by professionals across the organization, with or without the say-so of those in charge. "The security and risk people are having a hard time getting their arms around this deployment, keeping track of what people are doing, and managing the risk." As a result, there's a lot of tension between two groups: the people who want to use AI, and the people who need to manage its use. "No one wants to stifle innovation, but the security and risk people have never had to deal with something like this before," she says in a video chat with ZDNET. "Even though AI has been around for years, they didn't have to really worry about any of this technology until the rise of generative AI." Litan says the best way to allay concerns is to create a task force for AI that draws on experts from across the business and which considers privacy, security, and risk.


Why Cloud Auditing Data Federation is important for an enterprise

The Cloud Auditing Data Federation (CADF) standard is significant because it facilitates the federation of normative audit event data to and from cloud providers. It offers fresh perspectives on the hardware, software, and network infrastructure of the provider that are used to power certain tenant applications in a multi-vendor setting. Regardless of where applications run, on-premises, in a hybrid cloud, or in a public cloud, compliance with corporate policies and industry laws is a crucial component of every organization’s strategy. By making existing cloud and service audit interfaces, technologies, and tools more consistent, compatible, and even functional, CADF seeks to address significant issues. ... Application security (AppSec) is the practice of identifying and reducing the number of security flaws while reducing the probability of a successful attack. It addresses every security issue that comes up during the design, creation, and deployment of an application. CADF offers application security certification, self-management, and self-audit in cloud environments, which can assist customers in ensuring compliance with corporate policies and industry laws.
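A CADF event is a structured record of who did what to which resource, with what outcome, as observed by whom. The sketch below shows the general shape of such a record as JSON; the core field names follow the DMTF CADF specification, but the resource typeURIs and IDs are illustrative placeholders:

```python
import json
import uuid
from datetime import datetime, timezone

# A CADF activity event answers: who (initiator) did what (action)
# to which resource (target), with what result (outcome), as seen
# by whom (observer), and when (eventTime).
event = {
    "typeURI": "http://schemas.dmtf.org/cloud/audit/1.0/event",
    "id": str(uuid.uuid4()),
    "eventType": "activity",
    "eventTime": datetime.now(timezone.utc).isoformat(),
    "action": "create",
    "outcome": "success",
    "initiator": {"id": "user-42", "typeURI": "data/security/account/user"},
    "target": {"id": "vm-7", "typeURI": "service/compute/machine"},
    "observer": {"id": "audit-service", "typeURI": "service/security"},
}

print(json.dumps(event, indent=2))
```

Because every provider emits the same normative shape, audit tooling can correlate events across on-premises, hybrid, and public cloud deployments without per-vendor parsers.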


The CISO risk calculus: Navigating the thin line between paranoia and vigilance

Sometimes we forget the critical survival role that paranoia and anxiety have served in the collective survival of our species. Our early ancestors lived in environments filled with predators and other unknown threats. A healthy dose of paranoia enabled them to be more vigilant, helping them detect and avoid potential dangers. The challenge in our modern era is being able to distinguish genuine threats from the endless noise of false alarms, ensuring that our inherited paranoia and anxiety serve us, rather than hinder us. It also requires that we acknowledge and address the human element in the security calculus. ... Security training shouldn’t be a one-off initiative. While establishing robust policies is a crucial first step, it’s unrealistic to expect that people will automatically understand and consistently adhere to them. Human nature is not inherently programmed to retain and act on information presented only once. It’s not merely about providing information; it’s about continuously reinforcing that knowledge through repeated training. 



Quote for the day:

“The first step toward success is taken when you refuse to be a captive of the environment in which you first find yourself.” -- Mark Caine

Daily Tech Digest - December 16, 2023

AI: A Catalyst for Gender Equality in the Workplace

The Equality and Human Rights Commission reports that 77% of mothers have encountered negative or possibly discriminatory experiences during pregnancy, maternity leave, or upon returning to work. The joy of impending motherhood is often tainted by biases, as expecting mothers face subtle exclusions from projects or career advancements. Maternity leave, intended as a sacred period for bonding, becomes tinged with anxiety as women grapple with the fear of being sidelined professionally and the pressure to resume duties prematurely. Returning to the workplace brings feelings of inadequacy and frustration, met with insufficient support for balancing work and family responsibilities. These experiences, rife with frustration and disappointment, mark a daunting struggle for women seeking to re-establish themselves professionally post-maternity leave. However, despite these challenges, women actively choose to re-enter the workforce, embarking on the second phase of their careers post-sabbatical. Addressing these issues requires normative frameworks that ethically tackle the consequences of AI usage.


How to Identify and Address the Challenges of Excessive Business Growth

In other words, when processes start breaking down, and you find yourself constantly in reactive, catch-up mode, it's a sign you need more capacity. The tipping point will vary for each company, but if productivity and quality take a nosedive, growth has become excessive for your present resources. Other red flags include: Customer complaints spike; Employees seem stressed, burned out; You're always scrambling to meet deadlines; Infrastructure creaks under the weight - think cyberattacks, IT failures, supply chain issues; No time for strategy, only tackling emergencies; Costs rising faster than revenue; Profitability declines. Essentially, if growth starts hurting rather than helping, it's time for a change. ... Trying to manage a 100-person company like a 10-person startup will lead to chaos. But running a 10-person shop like a rigid 100-person bureaucracy will cause frustration. Align your leadership style, organizational structure, systems, and talent to your current size and growth needs.


AI Pushes Universities to Modernize IT Infrastructure

The convenience and accessibility of those technologies have created new demands for higher-quality and customizable learning experiences in higher education. According to data from McKinsey, 60% of students report that classroom learning technologies such as generative AI, machine learning and supercomputing have improved their learning and grades since COVID-19 began. In addition to using AI in classrooms, institutions can implement AI solutions in their IT decision-making to create a reliable, secure data infrastructure. As AI becomes more mainstream in higher education operations, universities can better understand, invest in and apply AI-specific solutions to their IT needs. While investing in AI and the technology to support it, universities can improve operations, offering faster innovation and better student, faculty and researcher experiences. ... With demand for advanced technological offerings at universities becoming commonplace, IT teams face new challenges under small budgets. Many require modern IT infrastructure to support increasingly large datasets required for groundbreaking insights from research teams.


Future-proofing the digital rupee

Several factors contributed to the inception of India's CBDC. The global competition for CBDC development, coupled with the enthusiasm among nations to embrace digital solutions, played a pivotal role. The introduction of India's CBDC, the digital rupee, might have been influenced, at least partially, by the rising prevalence of cryptocurrencies, especially stablecoins. The Deputy Governor of the Reserve Bank of India (RBI) emphasised the need for caution in permitting such instruments. While stablecoins offer certain advantages, their applicability is confined to a limited number of developed countries. The success of UPI in India has raised questions about the necessity of deploying CBDCs in the country, perhaps making the digital rupee look like an inconspicuous addition to an already largely developed payments landscape. The RBI Deputy Governor cited the ascent of cryptocurrencies and concerns about policy sovereignty, along with improving digital transactions, as reasons for considering CBDCs. However, India presents a unique case with the well-established UPI system already in place.


How to lock down backup infrastructure

The first thing to do is to protect the privileged accounts in your backup system. First, separate these accounts from any centralized login system you use, such as Active Directory, because these systems are sometimes compromised. Create as much of a firewall between that production system and the backup system as possible. And, of course, use a safe password, and do not use any passwords for these accounts that are used anywhere else. (Personally, I would use a password manager to support having a different password everywhere.) Finally, make sure that any such logins are protected by multi-factor authentication, and use the best option available. Avoid the use of email or SMS-based MFA, as it is easily foiled by an experienced hacker. Try to use an OTP-based system of some kind, such as Google Authenticator, Symantec VIP, or Yubikey. Also investigate whether your backup system has enhanced authentication for dangerous actions, such as deletion of backups before their scheduled expiration, or restoration of any data to anywhere other than where it was originally created. Attackers use the first to quietly delete backups from your backup system without setting off any alarms, and the second to exfiltrate data by restoring it to a system they control.


Fortifying cyber defenses: A proactive approach to ransomware resilience

Rather than investing time in formulating non-binding pledges, the US Government should adopt a more proactive stance by directly procuring advanced cybersecurity tools and working on actionable solutions. These tools, which have been developed to keep data safe and stop ransomware attacks, exist and are continually evolving. By spearheading the implementation, through investment and education, the government can set a powerful example for the private sector to follow, thereby reinforcing the nation’s cyber infrastructure. The effectiveness of such tools is not hypothetical: they have been tested and proven in various cybersecurity battlegrounds. They range from advanced threat detection systems that use artificial intelligence to identify potential threats before they strike, to automated response solutions that can protect data on infected systems and networks, preventing the lateral spread of ransomware. Investing in these tools would not only enhance the government’s defensive capabilities but would also stimulate the cybersecurity industry, encouraging innovation and development of even more effective defenses.


Cloud squatting: How attackers can use deleted cloud assets against you

The risk from cloud squatting issues can even be inherited from third-party software components. In June, researchers from Checkmarx warned that attackers are scanning npm packages for references to S3 buckets. If they find a bucket that no longer exists, they register it. In many cases, the developers of those packages chose to use an S3 bucket to store pre-compiled binary files that are downloaded and executed during the package’s installation. So, if attackers re-register the abandoned buckets, they can host their own malicious binaries and achieve remote code execution on the systems of users who trust the affected npm package. ... The attack surface is very large, but organizations need to start somewhere, and the sooner the better. The IP reuse and DNS scenario seems to be the most widespread and can be mitigated in several ways: by using reserved IP addresses from a cloud provider, which won’t be released back into the shared pool until the organization explicitly releases them; by transferring the organization’s own IP addresses to the cloud; by using private (internal) IP addresses between services when users don’t need direct access to those servers; or by using IPv6 addresses, if offered by the cloud provider, because the address space is so large that addresses are unlikely ever to be reused.
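The kind of scan Checkmarx describes can be approximated with a simple pattern match over package sources. The sketch below is illustrative only: it covers the two common S3 URL styles, ignores many edge cases, and the bucket names in the example are made up:

```python
import re

# Match virtual-hosted-style (bucket.s3[.region].amazonaws.com) and
# path-style (s3[.region].amazonaws.com/bucket) S3 URLs.
S3_URL = re.compile(
    r"https?://(?:([a-z0-9.-]+)\.s3[.-][a-z0-9-]*\.?amazonaws\.com"
    r"|s3[.-][a-z0-9-]*\.?amazonaws\.com/([a-z0-9.-]+))"
)

def bucket_refs(text: str) -> set[str]:
    """Extract candidate S3 bucket names referenced in a blob of source text."""
    return {m.group(1) or m.group(2) for m in S3_URL.finditer(text)}
```

Each extracted name could then be checked for existence (for example, with an unauthenticated HEAD request to the bucket endpoint); any name that no longer resolves to a live bucket is a takeover candidate and should be re-registered by the package owner or removed from the package.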


Data Leaders Say ‘AI Paralysis’ Stifling Adoption: Study

While AI is not new in the data industry, the public’s fascination with generative AI has fueled a veritable gold rush as industries adopt the emerging technologies for a competitive advantage. But the lack of safety guidelines, organizational frameworks, and training may be stifling AI adoption efforts, according to the report. ... “What happened is everybody got ahold of the GenAI hammer, and now everything looks like a nail,” she says, adding that CIOs and CDOs must do their best to articulate the technical needs to non-technical members of the C-suite. “I do think there’s a disconnect between the CIO and CDO and the chief executive. We should not, in the data and technology space, expect people to understand the layer of complexity that we have to deal with. What we should be doing is taking that complexity and creating a story and a narrative, so it makes sense to the other people in our organization and businesses we work with.” The report also showed that data governance has stalled just as AI is being adopted across industries.


Artificial Intelligence Governance & Alignment with Enterprise Governance

The objectives of AI governance are to: ensure the enterprise adopts pre-trained foundation models in a compliant manner; guide the decision-making process to maintain coherence across AI solutions; and maintain the relevance of the enterprise to meet changing requirements. ... The AI governance framework helps the enterprise manage, govern, monitor, and adopt AI activities, practices, and systems across the organization. The framework also defines a set of metrics that can be used to measure the success of its implementation. ... Establish an executive team to identify and oversee AI initiatives across the enterprise. Define a clear vision and strategy for AI implementation aligned with enterprise goals and business functions. Develop practical communications to, and appropriate access for, employees. Set up AI governance across the enterprise. Define the roles and responsibilities of individuals involved in AI development, deployment, and monitoring. Foster collaboration between AI experts, domain experts, and business stakeholders. Establish a centralized, cross-functional team to review and update AI governance practices as technology, regulations, and enterprise needs evolve.


Role of digital in risk management and compliances

Embracing risks is crucial for survival, as risks are inherent in every aspect of business, whether financial or non-financial. As Mark Zuckerberg says, “The only strategy that is guaranteed to fail is not taking risks.” However, this leads to a fundamental question: should businesses pursue risks solely in pursuit of higher returns? Going beyond the pursuit of returns alone, businesses in today’s context should focus on Return of capital and not just Return on capital. Business is about taking calculated risks and managing risks to achieve business goals. Risk exposures must be strategically crafted, with a comprehensive risk management framework in place. We piloted technology-enabled compliance way back in 2015, starting with an India-centric compliance tool that has now been implemented across the global organisation. The tool aids informed decision-making and swift response to emerging risks. The digital solution facilitates seamless communication and collaboration between dispersed teams, ensuring a coordinated approach to risk management. 



Quote for the day:

"Your job gives you authority. Your behavior gives you respect." -- Irwin Federman

Daily Tech Digest - December 14, 2023

Moral Machines: The Importance of Ethics in Generative AI

A transparent model can provide better functionality than an opaque one because it gives users explanations for its outputs. An opaque model does not explain its reasoning process, which introduces risk and potential liability if a generative AI tool produces unexpected or inaccurate results. This lack of visibility also makes opaque models more difficult to test than their transparent counterparts. As such, it’s important to consider generative AI tools with high transparency when working to build ethical systems. Explainability of AI models is another important aspect of creating ethical systems, yet it is challenging to control. AI models, specifically deep learning models, use thousands upon thousands of parameters when creating an output. This process can be nearly impossible to trace from beginning to end, which limits user visibility. Lack of explainability has already surfaced in real-world problems: we’ve seen many examples of AI hallucinations, in which a model produces an output that is entirely false or implausible, such as the Bard chatbot error in February 2023.


12 Software Architecture Pitfalls and How to Avoid Them

Reusing an existing architecture is seldom successful unless the QARs for the new architecture match the ones that an existing architecture was designed to meet. Past performance is no guarantee of future success! Reusing part of an existing application to implement an MVP rapidly may constrain its associated MVA by including legacy technologies in its design. Extending existing components in order to reuse them may complicate their design and make their maintenance more difficult and therefore more expensive. ... Architectural work is problem-solving, with the additional skill of being able to make trade-offs informed by experience in solving particular kinds of problems. Developers who have not had experience solving architectural problems will learn, but they will make a lot of mistakes before they do. ... While new technologies offer interesting capabilities, they always come with trade-offs and unintended side effects. The new technologies don’t fundamentally or magically make meeting QARs unimportant or trivial; in many cases the ability of new technologies to meet QARs is completely unknown.


CIOs weigh the new economics and risks of cloud lock-in

“It is true that hyperscale cloud providers have hit such a critical mass that they create their own gravitational pull,” he says. “Once you adopt their cloud platforms, it can be difficult and expensive to migrate out. [But] CIOs today have more choice in cloud providers than ever. It is no longer a decision between AWS and Azure. Google has been successfully executing a strategy to attract more enterprise customers. Even Oracle has made the transition from focusing on in-house technology to become a full-service cloud provider.” CIOs may consider other approaches, McCarthy adds, such as selecting a single-tenant cloud solution offered by HPE or Dell, which bundle hardware and software in an as-a-service business model that gives CIOs more cloud options. “Another alternative includes colocation companies like Equinix, which has been offering bare-metal IaaS for several years and has now created a partnership with VMware to extend those services higher up the stack,” he says, adding that CIOs should not view a cloud provider “as a location but rather as an operating model that can be deployed in service provider data centers, on-premise, or at the edge.”


Understanding the True Cost of a Data Breach in 2023

Data breaches are common in the modern world, which means even if your organization hasn’t suffered one, the chances of it happening aren’t negligible. Criminal groups stand to profit significantly from these actions, so they are innovative and invest time and money to conduct highly advanced attacks. This means that a data breach doesn’t simply appear one second and then disappear the next. An IBM report noted the average breach cycle lasts for 287 days, with businesses taking 212 days to detect it and an additional 75 to neutralize the threat. Every organization should implement preventative measures to combat threat actors. This means building and exercising safe practices, like storing information securely, adhering to clear policies and training staff to understand data protection. Ultimately, the longer a breach continues, the more expensive it becomes. The Cost of a Data Breach Report 2023 found that companies that contain a breach within 30 days save over $1 million in contrast to those that take longer, so it pays to have a strong recovery process in place.


Fortifying confidential computing in Microsoft Azure

Adding GPU support to confidential VMs is a big change, as it expands the available compute capabilities. Microsoft’s implementation is based on Nvidia H100 GPUs, which are commonly used to train, tune, and run various AI models including computer vision and language processing. The confidential VMs allow you to use private information as a training set, for example training a product evaluation model on prototype components before a public unveiling, or working with medical data, training a diagnostic tool on X-ray or other medical imagery. Instead of embedding a GPU in a VM, and then encrypting the whole VM, Azure keeps the encrypted GPU separate from your confidential computing instance, using encrypted messaging to link the two. Both operate in their own trusted execution environments (TEE), ensuring that your data remains secure. Conceptually this is no different from using an external GPU over Thunderbolt or another PCI bus. Microsoft can allocate GPU resources as needed, with the GPU TEE ensuring that its dedicated memory and configuration are secured.


From reactive to proactive: Always-ready CFD data center analysis

By synchronizing with these toolsets, digital twin models can pull all relevant, necessary data and update accordingly. The data includes objects on the floor plan, assets in the racks, power chain connections, historical power and environmental readings, and perforated tile and return grate locations. Therefore, the digital twin model is always ready to run the next predictive scenario with current data and minimal supervision from the operational team. As part of the routine output from the software, DataCenter Digital Twin produces Excel-ready reports, capacity dashboards, CFD reports, and go/no-go planning analysis. Teams can then use this information to evaluate future capacity plans, conduct sensitivity studies (such as redundancy failure or transient power failure), and run energy optimization studies as needed. Much of this functionality is available through an intuitive and accessible web portal. We know that every organization has a unique set of problems, priorities, and workflows. As such, we’ve split DataCenter Insight Platform into two offerings – DataCenter Asset Twin and DataCenter Digital Twin.


AI-Powered Encryption: A New Era in Cybersecurity

AI-powered encryption represents a groundbreaking advancement in cybersecurity, leveraging the capabilities of artificial intelligence to strengthen data protection. At its core, AI-powered encryption utilizes machine learning algorithms to continuously analyze and adapt to new cyber threats, making it an incredibly dynamic and proactive defense mechanism. By employing AI-driven pattern recognition and predictive analytics, this encryption method can rapidly identify potential vulnerabilities and create tailored encryption protocols to thwart would-be attackers. One key aspect of AI-powered encryption is its ability to autonomously adjust security parameters in real-time based on evolving risk factors. This adaptability ensures that data remains secure even as cyber threats become more sophisticated. Moreover, the integration of AI enables encryption systems to swiftly detect anomalies or suspicious activities within the network, providing an extra layer of defense against unauthorized access or data breaches. 
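The anomaly-detection half of this claim rests on a very old statistical idea: learn a baseline, then alert on deviations. As a purely illustrative sketch (not any vendor's method), a rolling z-score over a metric stream flags values that deviate sharply from recent history:

```python
import statistics

def anomalies(samples: list[float], window: int = 20, threshold: float = 3.0) -> list[int]:
    """Return indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    flagged = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero on flat history
        if abs(samples[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged
```

Real systems layer far more sophisticated models on top of network telemetry, but the contract is the same: a spike that breaks the learned baseline triggers the alert that feeds the defensive response.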


7 Best Practices for Developers Getting Started with GenAI

Experiment (and encourage your team to experiment) with GenAI tools and code-gen solutions, such as GitHub Copilot, which integrates with every popular IDE and acts as a pair programmer. Copilot offers programmers suggestions, helps troubleshoot code and generates entire functions, making it faster and easier to learn and adapt to GenAI. A word of warning when you first use these off-the-shelf tools: Be wary of using proprietary or sensitive company data, even when just feeding the tool a prompt. GenAI vendors may store your data and use it in future training runs, a major no-no for your company’s data policy and info-security protocol. ... One of the first steps to deploying GenAI well is to master writing prompts, which is both an art and a science. While prompt engineer is an actual job description, it’s also a good moniker for anyone looking to improve their use of AI. A good prompt engineer knows how to develop, refine and optimize text prompts to get the best results and improve overall AI system performance. Prompt engineering doesn’t require a particular degree or background, but those doing it need to be skilled at explaining things well.


Could Your Organization Benefit from Hyperautomation?

Building a sophisticated hyperautomation ecosystem requires a significant technology investment, Manders says. “Additionally, the integration of multiple technologies and tools, inherent in hyperautomation, can usher in increased complexity, making ecosystem maintenance a challenging endeavor.” Failing to establish clear goals and governance guidelines can also create serious challenges. Automation without governance could lead individual departments to create their own automation processes, which may conflict with other departments’ processes. The resulting hyperautomation silos could lead to some departments failing to take advantage of solutions fellow departments have already deployed. Additionally, every time an organization transports data to another process or platform, there’s the risk of data leaks. “If we don’t follow best practices and ensure that data is secure, this information could fall into the wrong hands,” Rahn warns. Hyperautomation may also leave adopters dependent on a particular vendor’s ecosystem of tools and technologies.


How insurtech is using artificial intelligence

As insurers look to become more customer centric, the coupling of AI with advanced analytics can help provide a more specific, personalised and real-time picture of insurance customers. With insurance customers coming to rely on online platforms for purchasing and managing their policies for such a particular commodity, interactions with the firms themselves are few and far between, which can water down the user experience. However, experience orchestration — the leveraging of customer data and AI by insurance companies to create highly personalised interactions — can be implemented to improve relations long-term. Manan Sagar, global head of insurance at Genesys, explains ... This approach not only improves the customer experience but also enhances employee efficiency by automating tasks or routing calls more effectively. “As the insurance industry navigates the digital age, experience orchestration can serve as a powerful tool to uphold the tradition of trust and personal relationships that have long defined the industry. Through this, firms can differentiate themselves in an increasingly commoditised market and ensure their customers remain loyal and satisfied.”



Quote for the day:

"A true leader always keeps an element of surprise up his sleeve which others cannot grasp but which keeps his public excited and breathless." -- Charles de Gaulle

Daily Tech Digest - December 13, 2023

The tide comes in for subsea cable networks

Our subsea networks are a victim of the problem, but they are also a contributor - as is every industrialized sector. Nicole Starosielski, author of The Undersea Network and subsea cable lead principal investigator for Sustainable Subsea Networks, openly criticized the less sustainable aspects of subsea cables, while acknowledging the difficulty that Sustainable Subsea Networks has had in actually quantifying the sector. “It’s a difficult process, generating a carbon footprint of the [subsea cable networks] industry. Unlike a data center which has four walls where you can draw your boundary, the cable industry is comprised of so many elements - from the landing station to cable annexation,” said Starosielski. “There are all these other pieces that the industry is trying to account for. One is obviously a marine aspect. You have a fleet of ships that are older, and there's not a lot of overhead and margin in the supply side of the marine sector. Google has money to build cables, but you don't see SubCom, ASN, and NEC running around with a lot of extra cash to build new ships.”


Five Things for Risk Professionals to Put on Their 2024 To-Do List

Organizations face a critical question: how can they stay ahead of unforeseen challenges? This requires understanding and adapting to emerging risks—like those new, evolving threats that arise from disruptive technology and changing regulatory landscapes. So, let’s consider this scenario: a technology firm faces a sudden regulatory change, impacting its operations. ... This is where organizational resilience becomes pivotal, transforming challenges into opportunities. But how do risk professionals identify emerging risks, particularly those associated with disruptive technologies? The answer lies in fostering a mindset that emphasizes continuous learning and constant monitoring of risks. This approach is complemented by innovative methods such as agile risk assessments and scenario analysis. Moreover, ISACA plays an instrumental role by providing access to a global network of expertise, supporting the risk community with dialogue about technology-focused risk analysis, digital literacy and understanding of the ethical and regulatory aspects of new technologies.


How C-Level Executives Can Increase Cyber Resilience

First things first: To secure your organization’s C-suite, start by putting basic security measures in place. All executive accounts should be secured using multifactor authentication (MFA). Avoid relying on SMS, as it can be compromised more easily than other options. Second, a thorough audit is crucial to determine what access privileges the CEO and other executive officers currently have. Given the unpredictable demands on their time, senior executives might have been granted access to key systems outside of predefined time windows. However, this added flexibility comes with risks. Any access that senior executives have to new products or proprietary information should be on a temporary basis to eliminate the potential for long-term vulnerabilities. It is also vital to implement robust monitoring, logging and alerting to oversee such access and ensure it is used legitimately. Third, the least privilege approach should also apply to senior executives. For example, C-level executives are more concerned about overall sales trends than the details around each deal, so there is generally no need for them to have write or modify permissions for the CRM or other critical databases.


The intersection of telehealth and AI: How can they reinforce each other?

AI tools help streamline the triage process, making it more user-friendly as well. It begins by collecting basic information like demographics and risk factors, followed by inquiries about the patient's primary symptoms, covering a wide age range from newborns to adults. ... Currently, AI tools are not authorized to diagnose patients. Despite the remarkable progress in generative AI, we must remain cautious about their practical application in healthcare. Our blood pressure cuffs are certified medical devices, and it's noteworthy that while AI tools possess significant capabilities, they are not subject to the same regulatory rigor. It's critical to establish a robust regulatory framework to guide and set standards for AI-assisted diagnosis in the future. This includes addressing key challenges like ensuring maximum transparency in AI decision-making processes and tackling issues related to bias and inaccuracies. I believe the ideal path forward is to position AI tools as optimal supporters for both patients and healthcare providers.


How to Effectively Draft Data Processing Agreements to Protect Information Shared with Service Providers

Privacy laws may impose notice requirements, remediation obligations and penalties on data controllers for privacy breaches. Thus, establishing clear sets of obligations for data processors in the case of a security breach can allow data controllers to meet their own legal obligations. Data controllers should expand the DPA provisions for security breach obligations to include any security incident or misuse of the data by the data processor or its personnel. This obligation should include both real and suspected incidents as this allows for mitigation efforts to be deployed early on by the data controller rather than waiting for a confirmation of a security incident, which can take several weeks depending on the complexity of the required forensic investigation. Data controllers should include security control provisions in the DPA setting out the steps to be taken by a data processor to secure sensitive data and respond to data incidents. Depending on the nature and sensitivity of the data, data controllers may lay out more specific steps to be taken before or after a security incident. 


EU’s AI Act: Europe’s New Rules for Artificial Intelligence

Developers of AI systems deemed to be high risk will have to meet certain obligations set by European lawmakers, including mandatory assessment of how their AI systems might impact the fundamental rights of citizens. This applies to the insurance and banking sectors, as well as any AI systems with “significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law.” AI models that are considered high-impact and pose a systemic risk – meaning they could cause widespread problems if things go wrong – must follow more stringent rules. Developers of these systems will be required to perform evaluations of their models, as well as “assess and mitigate systemic risks, conduct adversarial testing, report to the (European) Commission on serious incidents, ensure cybersecurity and report on their energy efficiency.” Additionally, European citizens will have a right to launch complaints and receive explanations about decisions made by high-risk AI systems that impact their rights. To support European startups in creating their own AI models, the AI Act also promotes regulatory sandboxes and real-world-testing. 


SEI platforms empower managers to gain real-time insights into their team’s progress, eliminating unnecessary meetings and constant check-ins. By breaking down silos and providing a clear view of everyone’s workload, SEI platforms foster greater team autonomy, allowing them to receive assistance when needed so they can operate more efficiently. ... Even in highly efficient and high-performing organizations, some projects may experience delays or budget overruns, and it can be hard to understand and communicate why. SEI platforms can help leaders spot recurring bottlenecks or inefficiencies and work with their teams to improve the relevant processes. They also make it possible to test the efficacy of process changes. ... Specific metrics provided by SEI platforms allow engineering leaders to assess the quality of their team’s work, evaluate code review practices, and maintain stability and efficiency in software delivery. Visualizations of trends, patterns, and correlations from these metrics offer valuable insights to engineering leaders, leading to informed decision-making.


Shifting data protection regulations show why businesses must put privacy at their core

It isn’t just legislators pressuring businesses to take their data privacy responsibilities seriously. Public awareness of how data is collected, utilized and shared is on the rise, affecting consumer behavior accordingly. Publicity around the EU General Data Protection Regulation (GDPR) played a very important role in educating consumers in the UK about data privacy, with 79% of UK consumers saying that transparency about how their data is collected and used is important to them. But they also recognize the value of their data, with 61% of UK consumers viewing their personal information as an asset that can be used to negotiate better prices and offers with companies. And there is growing evidence that US consumers are increasingly privacy aware. According to DataGrail’s Privacy Trends 2023 report, DSRs – privacy requests submitted by data subjects to access or modify the data a company holds on them – grew by 72% year-on-year between 2021 and 2022. Of these requests, 52% came from people living in states without enacted privacy laws.


Hiring sentiment seems positive for Q4 after witnessing sluggishness in Q3

Consumer and retail companies will see a resurgence in Q4 from muted demand in semi-urban and rural areas in the festive season in Q3. While the report carries positive sentiment for the financial services sector, we would observe cautious moves from banks, NBFCs and Fintechs with increased regulatory pressure from the RBI on lending norms for riskier credits. According to the report findings, H2 is projecting positive incremental hiring, including workforce expansion, new hiring, and replacement hiring. This surge in workforce expansion can be attributed to government policies and initiatives aimed at fortifying employment opportunities and cultivating a business-friendly environment. Notably, India experienced a remarkable 7.8% surge in GDP during the first quarter of the fiscal year 2023-24 (Q1 FY24). This robust GDP growth underscores a potent economic rebound, driving the acceleration in incremental hiring across the nation. The report dives into the multifaceted factors that influence employment in India. According to the data, economic conditions significantly impact the employment environment, as cited by 69% of respondents.


Is the UK-US data bridge doomed to fail?

While experts agree that improvements have been made compared to previous efforts, concerns about the legislation remain. The Open Rights Group has argued that the data bridge will “betray UK democratic values, and position the UK as a data-laundering haven pushing for a global privacy race to the bottom”. “This approach doesn’t only fail to provide a long-term, pragmatic solution to international data transfers, but would further the UK’s reputation as an ‘international rogue actor’ that recent UK Governments have advanced throughout the years,” writes Mariano delli Santi, a data protection expert at ORG. The ICO has also been quick to highlight specific areas that could pose risks to data subjects in the UK. The watchdog has raised concerns about certain terminology used and also recommends monitoring the implementation of the UK-US data bridge generally, to ensure it operates as intended. For example, the ICO points out that the UK-US data bridge does not name all the special category data defined in Article 9 of UK GDPR, such as biometric, genetic, criminal offense, or sexual orientation data.



Quote for the day:

"Leadership occurs any time you attempt to influence the thinking, development, or beliefs of somebody else." -- Dr. Ken Blanchard