Daily Tech Digest - December 16, 2023

AI: A Catalyst for Gender Equality in the Workplace

The Equality and Human Rights Commission reports that 77% of mothers have encountered negative or possibly discriminatory experiences during pregnancy, maternity leave, or upon returning to work. The joy of impending motherhood is often tainted by biases, as expecting mothers face subtle exclusions from projects or career advancements. Maternity leave, intended as a sacred period for bonding, becomes tinged with anxiety as women grapple with the fear of being sidelined professionally and the pressure to resume duties prematurely. Returning to the workplace brings feelings of inadequacy and frustration, met with insufficient support for balancing work and family responsibilities. Together, these experiences mark a daunting struggle for women seeking to re-establish themselves professionally post-maternity leave. However, despite these challenges, women actively choose to re-enter the workforce, embarking on the second phase of their careers post-sabbatical. Addressing these issues requires normative frameworks that ethically tackle the consequences of AI usage.


How to Identify and Address the Challenges of Excessive Business Growth

In other words, when processes start breaking down and you find yourself constantly in reactive, catch-up mode, it's a sign you need more capacity. The tipping point will vary for each company, but if productivity and quality take a nosedive, growth has become excessive for your present resources. Other red flags include: customer complaints spike; employees seem stressed and burned out; you're always scrambling to meet deadlines; infrastructure creaks under the weight (think cyberattacks, IT failures, supply chain issues); there's no time for strategy, only tackling emergencies; costs rise faster than revenue; profitability declines. Essentially, if growth starts hurting rather than helping, it's time for a change. ... Trying to manage a 100-person company like a 10-person startup will lead to chaos. But running a 10-person shop like a rigid 100-person bureaucracy will cause frustration. Align your leadership style, organizational structure, systems, and talent to your current size and growth needs.


AI Pushes Universities to Modernize IT Infrastructure

The convenience and accessibility of those technologies have created new demands for higher-quality and customizable learning experiences in higher education. According to data from McKinsey, 60% of students report that classroom learning technologies such as generative AI, machine learning and supercomputing have improved their learning and grades since COVID-19 began. In addition to using AI in classrooms, institutions can implement AI solutions in their IT decision-making to create a reliable, secure data infrastructure. As AI becomes more mainstream in higher education operations, universities can better understand, invest in and apply AI-specific solutions to their IT needs. While investing in AI and the technology to support it, universities can improve operations, offering faster innovation and better student, faculty and researcher experiences. ... With demand for advanced technological offerings at universities becoming commonplace, IT teams face new challenges on small budgets. Many need modern IT infrastructure to support the increasingly large datasets that research teams require for groundbreaking insights.


Future-proofing the digital rupee

Several factors contributed to the inception of India's CBDC. The global competition for CBDC development, coupled with the enthusiasm among nations to embrace digital solutions, played a pivotal role. The introduction of India's CBDC, the digital rupee, might have been influenced, at least partially, by the rising prevalence of cryptocurrencies, especially stablecoins. The Deputy Governor of the Reserve Bank of India (RBI) emphasised the need for caution in permitting such instruments. While stablecoins offer certain advantages, their applicability is confined to a limited number of developed countries. The success of UPI in India has raised questions about the necessity of deploying CBDCs in the country, perhaps making the digital rupee look like an inconspicuous addition to an already largely developed payments landscape. The RBI Deputy Governor cited the ascent of cryptocurrencies and concerns about policy sovereignty as among the reasons for considering CBDCs, along with improving digital transactions. However, India presents a unique case with the well-established UPI system already in place.


How to lock down backup infrastructure

The first thing to do is to protect the privileged accounts in your backup system. First, separate these accounts from any centralized login system you use, such as Active Directory, because these systems are sometimes compromised. Create as much of a firewall between that production system and the backup system as possible. And, of course, use a strong password, and do not reuse for these accounts any passwords that are used anywhere else. (Personally, I would use a password manager to support having a different password everywhere.) Finally, make sure that any such logins are protected by multi-factor authentication, and use the best option available. Avoid email or SMS-based MFA, as it is easily foiled by an experienced hacker. Try to use an OTP-based system of some kind, such as Google Authenticator, Symantec VIP, or a YubiKey. Also investigate whether your backup system has enhanced authentication for dangerous actions, such as deletion of backups before their scheduled expiration, or restoration of any data to anywhere other than where it was originally created. The first can be used to quietly delete backups from your backup system without setting off any alarms, and the second to exfiltrate data by restoring it to a system the hacker controls.
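
To make the last point concrete, here is a minimal sketch of what OTP-gated deletion could look like, assuming the Python pyotp library. The delete_backup() function and the stored secret are illustrative placeholders, not any particular backup product's API:

```python
# Minimal sketch: gate a destructive backup operation behind a TOTP check.
# Assumes the pyotp library (pip install pyotp); delete_backup() is a
# hypothetical placeholder, not a real backup product's API.
import pyotp

TOTP_SECRET = "JBSWY3DPEHPK3PXP"  # per-admin secret, normally kept in an HSM or vault


def delete_backup(backup_id: str) -> None:
    print(f"Backup {backup_id} deleted before scheduled expiration.")


def delete_backup_with_mfa(backup_id: str, otp_code: str) -> None:
    totp = pyotp.TOTP(TOTP_SECRET)
    # valid_window=1 tolerates minor clock skew between client and server
    if not totp.verify(otp_code, valid_window=1):
        raise PermissionError("MFA challenge failed; deletion refused, alert raised.")
    delete_backup(backup_id)
```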


Fortifying cyber defenses: A proactive approach to ransomware resilience

Instead of investing time in formulating non-binding pledges, the US Government should adopt a more proactive stance by directly procuring advanced cybersecurity tools. These tools, which have been developed to keep data safe and stop ransomware attacks, exist and are continually evolving. By spearheading the implementation, through investment and education, the government can set a powerful example for the private sector to follow, thereby reinforcing the nation’s cyber infrastructure. The effectiveness of such tools is not hypothetical: they have been tested and proven in various cybersecurity battlegrounds. They range from advanced threat detection systems that use artificial intelligence to identify potential threats before they strike, to automated response solutions that can protect data on infected systems and networks, preventing the lateral spread of ransomware. Investing in these tools would not only enhance the government’s defensive capabilities but would also stimulate the cybersecurity industry, encouraging innovation and development of even more effective defenses.


Cloud squatting: How attackers can use deleted cloud assets against you

The risk from cloud squatting issues can even be inherited from third-party software components. In June, researchers from Checkmarx warned that attackers are scanning npm packages for references to S3 buckets. If they find a bucket that no longer exists, they register it. In many cases the developers of those packages chose to use an S3 bucket to store pre-compiled binary files that are downloaded and executed during the package’s installation. So, if attackers re-register the abandoned buckets and host their own malicious binaries there, they can achieve remote code execution on the systems of users who trust the affected npm package. ... The attack surface is very large, but organizations need to start somewhere, and the sooner the better. The IP reuse and DNS scenario seems to be the most widespread and can be mitigated in several ways: by using reserved IP addresses from a cloud provider, which won’t be released back into the shared pool until the organization explicitly releases them; by transferring your own IP addresses to the cloud; by using private (internal) IP addresses between services when users don’t need direct access to those servers; or by using IPv6 addresses, if the cloud provider offers them, because the address space is so large that any given address is unlikely ever to be reused.
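
As an illustration of the npm scenario, the following minimal sketch probes whether S3 bucket names referenced by your code are still registered. It assumes the Python requests library, and the bucket names are made up; an HTTP 404 with a NoSuchBucket error means the name is free for anyone, including an attacker, to claim:

```python
# Minimal sketch: check whether S3 buckets referenced by your code still exist.
# A 404/NoSuchBucket response means the name is unregistered and could be
# re-registered (squatted) by anyone. Assumes the requests library; the
# bucket list is illustrative.
import requests

referenced_buckets = ["my-team-release-artifacts", "legacy-installer-binaries"]

for bucket in referenced_buckets:
    resp = requests.get(f"https://{bucket}.s3.amazonaws.com/", timeout=10)
    if resp.status_code == 404 and "NoSuchBucket" in resp.text:
        print(f"DANGER: {bucket} no longer exists and is open to takeover")
    else:
        # 403 (AccessDenied) or 200 means the name is still registered
        print(f"ok: {bucket} is still registered (HTTP {resp.status_code})")
```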


Data Leaders Say ‘AI Paralysis’ Stifling Adoption: Study

While AI is not new in the data industry, the public’s fascination with generative AI has fueled a veritable gold rush for industries to adopt the emerging technologies for a competitive advantage. But the lack of safety guidelines, organizational frameworks, and training may be suffocating AI adoption efforts, according to the report. ... “What happened is everybody got ahold of the GenAI hammer, and now everything looks like a nail,” she says, adding that CIOs and CDOs must do their best to articulate the technical needs to non-technical members of the C-suite. “I do think there’s a disconnect between the CIO and CDO and the chief executive. We should not, in the data and technology space, expect people to understand the layer of complexity that we have to deal with. What we should be doing is taking that complexity and creating a story and a narrative, so it makes sense to the other people in our organization and businesses we work with.” The report also showed that data governance has stalled just as AI is being adopted across industries.


Artificial Intelligence Governance & Alignment with Enterprise Governance

The objectives of AI governance are: ensure the enterprise adopts pre-trained foundation models in a compliant manner; guide the decision-making process to keep AI solutions coherent; and maintain the enterprise's relevance as requirements change. ... The AI governance framework helps the enterprise manage, govern, monitor, and adopt AI activities, practices, and systems across the organization, and it defines a set of metrics that can be used to measure the success of the framework's implementation. ... Establish an executive team to identify and oversee AI initiatives across the enterprise. Define a clear vision and strategy for AI implementation aligned with enterprise goals and business functions. Develop practical communications for employees, along with appropriate access. Set up AI governance across the enterprise. Define the roles and responsibilities of individuals involved in AI development, deployment, and monitoring. Foster collaboration between AI experts, domain experts, and business stakeholders. Establish a centralized, cross-functional team to review and update AI governance practices as technology, regulations, and enterprise needs evolve.


Role of digital in risk management and compliances

Embracing risks is crucial for survival, as risks are inherent in every aspect of business, whether financial or non-financial. As Mark Zuckerberg says, “The only strategy that is guaranteed to fail is not taking risks.” However, this leads to a fundamental question: should businesses pursue risks solely in pursuit of higher returns? Going beyond the pursuit of returns alone, businesses in today’s context should focus on return of capital, not just return on capital. Business is about taking calculated risks and managing risks to achieve business goals. Risk exposures must be strategically crafted, with a comprehensive risk management framework in place. We piloted technology-enabled compliance way back in 2015, starting with an India-centric compliance tool that has now been implemented across the global organisation. The tool aids informed decision-making and swift response to emerging risks. The digital solution facilitates seamless communication and collaboration between dispersed teams, ensuring a coordinated approach to risk management.



Quote for the day:

"Your job gives you authority. Your behavior gives you respect." -- Irwin Federman

Daily Tech Digest - December 14, 2023

Moral Machines: The Importance of Ethics in Generative AI

A transparent model can provide better functionality than an opaque one, because it gives users explanations for its outputs. An opaque model does not explain its reasoning process, which introduces risk and potential liability if a generative AI tool produces unexpected or inaccurate results. This lack of visibility also makes opaque models harder to test than their transparent counterparts. As such, it’s important to consider generative AI tools with high transparency when working to build ethical systems. Explainability of AI models is another important aspect of creating ethical systems, yet it is challenging to control. AI models, specifically deep learning models, use thousands upon thousands of parameters when creating an output. This type of process can be nearly impossible to track from beginning to end, which limits user visibility. Lack of explainability has already caused real-world problems; we’ve seen many examples of AI hallucinations, which occur when a model provides an output that is entirely false or implausible, such as the Bard chatbot error in February 2023.
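
As one hedged illustration of explainability tooling, the sketch below attributes a model's predictions to its input features using the shap library. It assumes shap and scikit-learn are installed, and the data is synthetic, built so that one feature dominates by design:

```python
# Minimal sketch: per-prediction attributions with SHAP, one common way to add
# a measure of explainability to an otherwise opaque model. Assumes the shap
# and scikit-learn libraries; the data here is synthetic.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # feature 0 dominates by design

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])  # contribution of each feature
# Larger magnitudes for feature 0 indicate it drives the predictions.
print(shap_values)
```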


12 Software Architecture Pitfalls and How to Avoid Them

Reusing an existing architecture is seldom successful unless the quality attribute requirements (QARs) for the new architecture match the ones that an existing architecture was designed to meet. Past performance is no guarantee of future success! Reusing part of an existing application to implement an MVP rapidly may constrain its associated MVA by including legacy technologies in its design. Extending existing components in order to reuse them may complicate their design and make their maintenance more difficult and therefore more expensive. ... Architectural work is problem-solving, with the additional skill of being able to make trade-offs informed by experience in solving particular kinds of problems. Developers who have not had experience solving architectural problems will learn, but they will make a lot of mistakes before they do. ... While new technologies offer interesting capabilities, they always come with trade-offs and unintended side effects. The new technologies don’t fundamentally or magically make meeting QARs unimportant or trivial; in many cases the ability of new technologies to meet QARs is completely unknown.


CIOs weigh the new economics and risks of cloud lock-in

“It is true that hyperscale cloud providers have hit such a critical mass that they create their own gravitational pull,” he says. “Once you adopt their cloud platforms, it can be difficult and expensive to migrate out. [But] CIOs today have more choice in cloud providers than ever. It is no longer a decision between AWS and Azure. Google has been successfully executing a strategy to attract more enterprise customers. Even Oracle has made the transition from focusing on in-house technology to become a full-service cloud provider.” CIOs may consider other approaches, McCarthy adds, such as selecting a single-tenant cloud solution offered by HPE or Dell, which bundle hardware and software in an as-a-service business model that gives CIOs more cloud options. “Another alternative includes colocation companies like Equinix, which has been offering bare-metal IaaS for several years and has now created a partnership with VMware to extend those services higher up the stack,” he says, adding that CIOs should not view a cloud provider “as a location but rather as an operating model that can be deployed in service provider data centers, on-premise, or at the edge.”


Understanding the True Cost of a Data Breach in 2023

Data breaches are common in the modern world, which means even if your organization hasn’t suffered one, the chances of it happening aren’t negligible. Criminal groups stand to profit significantly from these actions, so they are innovative and invest time and money to conduct highly advanced attacks. This means that a data breach doesn’t simply appear one second and then disappear the next. An IBM report noted the average breach cycle lasts for 287 days, with businesses taking 212 days to detect it and an additional 75 to neutralize the threat. Every organization should implement preventative measures to combat threat actors. This means building and exercising safe practices, like storing information securely, adhering to clear policies and training staff to understand data protection. Ultimately, the longer a breach continues, the more expensive it becomes. The Cost of a Data Breach Report 2023 found that companies that contain a breach within 30 days save over $1 million in contrast to those that take longer, so it pays to have a strong recovery process in place.


Fortifying confidential computing in Microsoft Azure

Adding GPU support to confidential VMs is a big change, as it expands the available compute capabilities. Microsoft’s implementation is based on Nvidia H100 GPUs, which are commonly used to train, tune, and run various AI models including computer vision and language processing. The confidential VMs allow you to use private information as a training set: for example, training a product evaluation model on prototype components before a public unveiling, or working with medical data to train a diagnostic tool on X-ray or other medical imagery. Instead of embedding a GPU in a VM, and then encrypting the whole VM, Azure keeps the encrypted GPU separate from your confidential computing instance, using encrypted messaging to link the two. Both operate in their own trusted execution environments (TEE), ensuring that your data remains secure. Conceptually this is no different from using an external GPU over Thunderbolt or another PCI bus. Microsoft can allocate GPU resources as needed, with the GPU TEE ensuring that its dedicated memory and configuration are secured.


From reactive to proactive: Always-ready CFD data center analysis

By synchronizing with these toolsets, digital twin models can pull all relevant, necessary data and update accordingly. The data includes objects on the floor plan, assets in the racks, power chain connections, historical power and environmental readings, and perforated tile and return grate locations. Therefore, the digital twin model is always ready to run the next predictive scenario with current data and minimal supervision from the operational team. As part of the routine output from the software, DataCenter Digital Twin produces Excel-ready reports, capacity dashboards, CFD reports, and go/no-go planning analysis. Teams can then use this information to evaluate future capacity plans, conduct sensitivity studies (such as redundant failure or transient power failure), and run energy optimization studies as needed. Much of this functionality is available through an intuitive and accessible web portal. We know that every organization has a unique set of problems, priorities, and workflows. As such, we’ve split DataCenter Insight Platform into two offerings – DataCenter Asset Twin and DataCenter Digital Twin.


AI-Powered Encryption: A New Era in Cybersecurity

AI-powered encryption represents a groundbreaking advancement in cybersecurity, leveraging the capabilities of artificial intelligence to strengthen data protection. At its core, AI-powered encryption utilizes machine learning algorithms to continuously analyze and adapt to new cyber threats, making it an incredibly dynamic and proactive defense mechanism. By employing AI-driven pattern recognition and predictive analytics, this encryption method can rapidly identify potential vulnerabilities and create tailored encryption protocols to thwart would-be attackers. One key aspect of AI-powered encryption is its ability to autonomously adjust security parameters in real-time based on evolving risk factors. This adaptability ensures that data remains secure even as cyber threats become more sophisticated. Moreover, the integration of AI enables encryption systems to swiftly detect anomalies or suspicious activities within the network, providing an extra layer of defense against unauthorized access or data breaches. 
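
As a rough illustration of the anomaly-detection piece, the sketch below trains an Isolation Forest on "normal" network flows and flags an outlier. It assumes scikit-learn, and the flow features (bytes, packets, duration) are invented for the example:

```python
# Minimal sketch: flagging anomalous network flows with an Isolation Forest,
# the kind of ML-driven anomaly detection described above. Assumes
# scikit-learn; the flow features (bytes, packets, duration) are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal_flows = rng.normal(loc=[500, 40, 2.0], scale=[50, 5, 0.3], size=(1000, 3))
detector = IsolationForest(contamination=0.01, random_state=42).fit(normal_flows)

suspicious = np.array([[5000, 400, 0.1]])  # bulk transfer over a very short duration
print(detector.predict(suspicious))        # -1 = anomaly, 1 = normal
```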


7 Best Practices for Developers Getting Started with GenAI

Experiment (and encourage your team to experiment) with GenAI tools and code-gen solutions, such as GitHub Copilot, which integrates with every popular IDE and acts as a pair programmer. Copilot offers programmers suggestions, helps troubleshoot code and generates entire functions, making it faster and easier to learn and adapt to GenAI. A word of warning when you first use these off-the-shelf tools: Be wary of using proprietary or sensitive company data, even when just feeding the tool a prompt. GenAI vendors may store and use your data in future training runs, a major no-no for your company’s data policy and info-security protocol. ... One of the first steps to deploying GenAI well is to master writing prompts, which is both an art and a science. While prompt engineer is an actual job title, it’s also a good moniker for anyone looking to improve their use of AI. A good prompt engineer knows how to develop, refine and optimize text prompts to get the best results and improve the overall AI system performance. Prompt engineering doesn’t require a particular degree or background, but those doing it need to be skilled at explaining things well.
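
To make the vague-versus-refined distinction concrete, here is a minimal sketch using the OpenAI Python client as one example backend. The model name and prompts are illustrative, and, per the warning above, nothing proprietary should go into the prompt:

```python
# Minimal sketch: refining a vague prompt into a structured one, using the
# OpenAI Python client as one example backend. The model name and prompts
# are illustrative; never paste proprietary code or data into the prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague = "Fix my code."
refined = (
    "You are a senior Python reviewer. The function below should parse ISO-8601 "
    "dates but raises ValueError on 'Z' suffixes. Explain the bug in two "
    "sentences, then show a corrected version.\n\n"
    "def parse(ts):\n    from datetime import datetime\n"
    "    return datetime.fromisoformat(ts)\n"
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": refined}],
)
print(resp.choices[0].message.content)
```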


Could Your Organization Benefit from Hyperautomation?

Building a sophisticated hyperautomation ecosystem requires a significant technology investment, Manders says. “Additionally, the integration of multiple technologies and tools, inherent in hyperautomation, can usher in increased complexity, making ecosystem maintenance a challenging endeavor.” Failing to establish clear goals and governance guidelines can also create serious challenges. Automation without governance could lead individual departments to create their own automation processes, which may conflict with other departments’ processes. The resulting hyperautomation silos could lead to some departments failing to take advantage of solutions fellow departments have already deployed. Additionally, every time an organization transports data to another process or platform, there’s the risk of data leaks. “If we don’t follow best practices and ensure that data is secure, this information could fall into the wrong hands,” Rahn warns. Hyperautomation may also lead adopters to dependency on a particular vendor’s ecosystem of tools and technologies.


How insurtech is using artificial intelligence

As insurers look to become more customer-centric, the coupling of AI with advanced analytics can help provide a more specific, personalised and real-time picture of insurance customers. With insurance customers coming to rely on online platforms for purchasing and managing their policies for such a particular commodity, interactions with the firms themselves are few and far between, which can water down the user experience. However, experience orchestration — the leveraging of customer data and AI by insurance companies to create highly personalised interactions — can be implemented to improve relations long-term. Manan Sagar, global head of insurance at Genesys, explains ... This approach not only improves the customer experience but also enhances employee efficiency by automating tasks or routing calls more effectively. “As the insurance industry navigates the digital age, experience orchestration can serve as a powerful tool to uphold the tradition of trust and personal relationships that have long defined the industry. Through this, firms can differentiate themselves in an increasingly commoditised market and ensure their customers remain loyal and satisfied.”



Quote for the day:

"A true leader always keeps an element of surprise up his sleeve which others cannot grasp but which keeps his public excited and breathless." -- Charles de Gaulle

Daily Tech Digest - December 13, 2023

The tide comes in for subsea cable networks

Our subsea networks are a victim of the problem, but they are also a contributor - as is every industrialized sector. Nicole Starosielski, author of The Undersea Network and subsea cable lead principal investigator for Sustainable Subsea Networks, openly criticized the less sustainable aspects of subsea cables, while acknowledging the difficulty that Sustainable Subsea Networks has had in actually quantifying the sector. “It’s a difficult process, generating a carbon footprint of the [subsea cable networks] industry. Unlike a data center which has four walls where you can draw your boundary, the cable industry is comprised of so many elements - from the landing station to cable annexation,” said Starosielski. “There are all these other pieces that the industry is trying to account for. One is obviously a marine aspect. You have a fleet of ships that are older, and there's not a lot of overhead and margin in the supply side of the marine sector. Google has money to build cables, but you don't see SubCom, ASN, and NEC running around with a lot of extra cash to build new ships.”


Five Things for Risk Professionals to Put on Their 2024 To-Do List

Organizations face a critical question: how can they stay ahead of unforeseen challenges? This requires understanding and adapting to emerging risks—like those new, evolving threats that arise from disruptive technology and changing regulatory landscapes. So, let’s consider this scenario: a technology firm faces a sudden regulatory change, impacting its operations. ... This is where organizational resilience becomes pivotal, transforming challenges into opportunities. But how do risk professionals identify emerging risks, particularly those associated with disruptive technologies? The answer lies in fostering a mindset that emphasizes continuous learning and constant monitoring of risks. This approach is complemented by innovative methods such as agile risk assessments and scenario analysis. Moreover, ISACA plays an instrumental role by providing access to a global network of expertise, supporting the risk community with dialogue about technology-focused risk analysis, digital literacy and understanding of the ethical and regulatory aspects of new technologies.


How C-Level Executives Can Increase Cyber Resilience

First things first: To secure your organization’s C-suite, start by putting basic security measures in place. All executive accounts should be secured using multifactor authentication (MFA). Avoid relying on SMS, as it can be compromised more easily than other options. Second, a thorough audit is crucial to determine what access privileges the CEO and other executive officers currently have. Given the unpredictable demands on their time, senior executives might have been granted access to key systems outside of predefined time windows. However, this added flexibility comes with risks. Any access that senior executives have to new products or proprietary information should be on a temporary basis to eliminate the potential for long-term vulnerabilities. It is also vital to implement robust monitoring, logging and alerting to oversee such access and ensure it is used legitimately. Third, the least privilege approach should also apply to senior executives. For example, C-level executives are more concerned about overall sales trends than the details around each deal, so there is generally no need for them to have write or modify permissions for the CRM or other critical databases.
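
As a small illustration of the audit step, the sketch below uses boto3 to flag IAM users who hold standing AdministratorAccess. It assumes AWS credentials with IAM read permissions; a full audit would also cover group memberships and inline policies:

```python
# Minimal sketch: flag IAM users with standing AdministratorAccess, the kind
# of access audit described above. Assumes boto3 and AWS credentials with IAM
# read permissions; pagination and group/inline policies omitted for brevity.
import boto3

iam = boto3.client("iam")

for user in iam.list_users()["Users"]:
    attached = iam.list_attached_user_policies(UserName=user["UserName"])
    for policy in attached["AttachedPolicies"]:
        if policy["PolicyName"] == "AdministratorAccess":
            print(f"Review: {user['UserName']} has standing admin access")
```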


The intersection of telehealth and AI: How can they reinforce each other?

AI tools help streamline the triage process, making it more user-friendly as well. It begins by collecting basic information like demographics and risk factors, followed by inquiries about the patient's primary symptoms; the tools cover a wide age range, from newborns to adults. ... Currently, AI tools are not authorized to diagnose patients. Despite the remarkable progress in generative AI, we must remain cautious about its practical application in healthcare. Our blood pressure cuffs are certified medical devices, and it's noteworthy that while AI tools possess significant capabilities, they are not subject to the same regulatory rigor. It's critical to establish a robust regulatory framework to guide and set standards for AI-assisted diagnosis in the future. This includes addressing key challenges like ensuring maximum transparency in AI decision-making processes and tackling issues related to bias and inaccuracies. I believe the ideal path forward is to position AI tools as optimal supporters for both patients and healthcare providers.


How to Effectively Draft Data Processing Agreements to Protect Information Shared with Service Providers

Privacy laws may impose notice requirements, remediation obligations and penalties on data controllers for privacy breaches. Thus, establishing clear sets of obligations for data processors in the case of a security breach can allow data controllers to meet their own legal obligations. Data controllers should expand the DPA provisions for security breach obligations to include any security incident or misuse of the data by the data processor or its personnel. This obligation should include both real and suspected incidents as this allows for mitigation efforts to be deployed early on by the data controller rather than waiting for a confirmation of a security incident, which can take several weeks depending on the complexity of the required forensic investigation. Data controllers should include security control provisions in the DPA setting out the steps to be taken by a data processor to secure sensitive data and respond to data incidents. Depending on the nature and sensitivity of the data, data controllers may lay out more specific steps to be taken before or after a security incident. 


EU’s AI Act: Europe’s New Rules for Artificial Intelligence

Developers of AI systems deemed to be high risk will have to meet certain obligations set by European lawmakers, including mandatory assessment of how their AI systems might impact the fundamental rights of citizens. This applies to the insurance and banking sectors, as well as any AI systems with “significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law.” AI models that are considered high-impact and pose a systemic risk – meaning they could cause widespread problems if things go wrong – must follow more stringent rules. Developers of these systems will be required to perform evaluations of their models, as well as “assess and mitigate systemic risks, conduct adversarial testing, report to the (European) Commission on serious incidents, ensure cybersecurity and report on their energy efficiency.” Additionally, European citizens will have a right to launch complaints and receive explanations about decisions made by high-risk AI systems that impact their rights. To support European startups in creating their own AI models, the AI Act also promotes regulatory sandboxes and real-world testing.
SEI platforms empower managers to gain real-time insights into their team’s progress, eliminating unnecessary meetings and constant check-ins. By breaking down silos and providing a clear view of everyone’s workload, SEI platforms foster greater team autonomy, allowing them to receive assistance when needed so they can operate more efficiently. ... Even in highly efficient and high-performing organizations, some projects may experience delays or budget overruns, and it can be hard to understand and communicate why. SEI platforms can help leaders spot recurring bottlenecks or inefficiencies and work with their teams to improve the relevant processes. They also make it possible to test the efficacy of process changes. ... Specific metrics provided by SEI platforms allow engineering leaders to assess the quality of their team’s work, evaluate code review practices, and maintain stability and efficiency in software delivery. Visualizations of trends, patterns, and correlations from these metrics offer valuable insights to engineering leaders, leading to informed decision-making.


Shifting data protection regulations show why businesses must put privacy at their core

It isn’t just legislators pressuring businesses to take their data privacy responsibilities seriously. Public awareness of how data is collected, utilized and shared is on the rise, affecting consumer behavior accordingly. Publicity around the EU General Data Protection Regulation (GDPR) played a very important role in educating consumers in the UK about data privacy, with 79% of UK consumers saying that transparency about how their data is collected and used is important to them. But they also recognize the value of their data, with 61% of UK consumers viewing their personal information as an asset that can be used to negotiate better prices and offers with companies. And there is growing evidence that US consumers are increasingly privacy aware. According to DataGrail’s Privacy Trends 2023 report, DSRs – privacy requests submitted by data subjects to access or modify the data a company holds on them – grew by 72% year-on-year between 2021 and 2022. Of these requests, 52% came from people living in states without enacted privacy laws.


Hiring sentiment seems positive for Q4 after witnessing sluggishness in Q3

Consumer and retail companies will see a resurgence in Q4 after muted demand in semi-urban and rural areas during the Q3 festive season. While the report carries positive sentiment for the financial services sector, expect cautious moves from banks, NBFCs and fintechs given increased regulatory pressure from the RBI on lending norms for riskier credits. According to the report findings, H2 projections show positive incremental hiring, including workforce expansion, new hiring, and replacement hiring. This surge in workforce expansion can be attributed to government policies and initiatives aimed at fortifying employment opportunities and cultivating a business-friendly environment. Notably, India experienced a remarkable 7.8% surge in GDP during the first quarter of the fiscal year 2023-24 (Q1 FY24). This robust GDP growth underscores a potent economic rebound, driving the acceleration in incremental hiring across the nation. The report dives into the multifaceted factors that influence employment in India. According to the data, economic conditions significantly impact the employment environment, as cited by 69% of respondents.


Is the UK-US data bridge doomed to fail?

While experts agree that improvements have been made compared to previous efforts, concerns about the legislation remain. The Open Rights Group has argued that the data bridge will “betray UK democratic values, and position the UK as a data-laundering haven pushing for a global privacy race to the bottom”. “This approach doesn’t only fail to provide a long-term, pragmatic solution to international data transfers, but would further the UK’s reputation as an ‘international rogue actor’ that recent UK Governments have advanced throughout the years,” writes Mariano delli Santi, a data protection expert at ORG. The ICO has also been quick to highlight specific areas that could pose risks to data subjects in the UK. The watchdog has raised concerns about certain terminology used and also recommends monitoring the implementation of the UK-US data bridge generally, to ensure it operates as intended. For example, the ICO points out that the UK-US data bridge does not name all the special category data defined in Article 9 of UK GDPR, such as biometric, genetic, criminal offense, or sexual orientation data.



Quote for the day:

"Leadership occurs any time you attempt to influence the thinking, development or beliefs of somebody else." -- Dr. Ken Blanchard

Daily Tech Digest - December 12, 2023

The SEC action against SolarWinds highlights how tough it can get for CISOs

The SEC has accused Brown of misleading investors by not disclosing "known risks" and not accurately representing the company’s cybersecurity measures during and before the 2020 Sunburst cyberattack that affected thousands of customers in government agencies and companies globally. ... The claimed failures include not abiding by statements the company made on its website about its patterns and practices for developing software, as well as its internal password policies. The SEC complains in its filings that the company did not disclose cybersecurity risks independently from other risks, given SolarWinds' role and industry, nor pay extra attention to targeted attacks and the disclosure needs surrounding them. ... The SEC also indicated that SolarWinds did not limit administrative access to those who needed it. Too often, administrative rights in developer networks are granted too widely and left unrestricted. Internal staff expressed concerns that user access would lead to losses of organizational assets and personal data.


Deriving Actionable Insights and ROI from AI

To increase the ROI of AI, large language models (LLMs) must ingest clean and high-quality data for accurate, meaningful insights. This is only possible by investing in data discovery and data classification solutions and processes. Organizations will also face growing AI-related security challenges in 2024. This will lead them to set up guardrails that protect corporate and customer data. Businesses must also consider that company-specific or proprietary data ingested by LLMs could put organizations at risk if company financials or other private information are replicated to a public AI engine and exposed. ... There are many opportunities for businesses to benefit from AI; however, there also needs to be a rapid evolution of data classification and data life cycle management before businesses will be able to derive the value they expect from AI. This is especially important if companies are trying to justify ROI from their AI investment. Sustainability took a back seat during the pandemic and long after the worst of it passed, as organizations made major adjustments to operations and tried to find their new (or old) normal.
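
As a toy illustration of the classification idea, the sketch below screens text for obvious PII patterns before it is sent to an LLM. Real data discovery and classification tools go far beyond regexes like these, which are purely illustrative:

```python
# Minimal sketch: a regex-based screen for obvious PII before text is sent to
# an LLM. Real data classification tools go far beyond patterns like these;
# the patterns and examples are illustrative only.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_for_llm(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            raise ValueError(f"blocked: text contains possible {label}")
    return text

screen_for_llm("Q3 revenue grew 4% on stronger cloud demand")  # passes
# screen_for_llm("Contact jane.doe@corp.com for the deal terms")  # raises
```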


Startup Oxide Computer puts the ‘cloud’ back in on-prem private clouds

Oxide's main mission is to put the "cloud" back in private cloud computing. The company is built on the premise that you should be able to choose to rent or own cloud capacity, depending on the workload, not losing the benefits of cloud computing like elasticity when you choose the latter. To accomplish this, the Oxide team set out to build an entirely new cloud hardware rack that would deliver all of the advantages public cloud vendors enjoy, without sacrificing on control, efficiency, and flexibility. Another issue that limits private clouds is that many enterprises attempt to build their private clouds on Kubernetes. The problem is that Kubernetes was not designed for multitenancy, and, thus, it does not offer a true cloud experience. That's not a knock on Kubernetes, but the container orchestration software is typically deployed on top of bloated layers of software, adding complexity and making it difficult to manage at scale. ... According to Oxide, this design allows enterprises to be fully deployed within a few hours of unboxing the system, versus what typically takes weeks or months using the "kit car" build of OEM hardware.

We’re in a truly important phase of change due to the impact of artificial intelligence. In fact, I believe people have been quite amazed at how good it is. Even industry professionals have been quite surprised at how powerful it is. But it comes with dangers, and I think that’s the important point I talked about a few years ago and still find very important today; you really need to understand how this works to use it effectively, because you still have to understand that it works statistically, in the sense that it understands what the most probable words are to follow the paragraphs it has already seen. ... I think we will have to ask the question of whether we are developing a new species, whether this is an evolution of what we are doing, or whether we are going to have to consider a new hybrid species, which is probably the perspective of integrating artificial intelligence into our own species. Elon Musk is considering the idea with Neuralink. His response to the existential threat of artificial intelligence is that no, we must become it, we must integrate artificial intelligence and humans, which will generate new philosophical, social, and legal dilemmas in the future.


Quantum-Computing Approach Uses Single Molecules As Qubits For First Time

Some of the first demonstrations of the basic principles of quantum computing, in the late 1990s, used large numbers of molecules manipulated in a solution inside a nuclear magnetic resonance machine. Since then, researchers have developed a variety of other platforms for quantum computing, including superconducting circuits and individual ions held in a vacuum. Each of these objects is used as the fundamental unit of quantum information, or qubit — the quantum equivalent of the bits in classical computers. In the past few years, another strong contender has emerged, in which the qubits are made of neutral atoms — as opposed to ions — trapped with highly focused laser-beam ‘tweezers’. Now, two separate teams have made early progress towards using this approach with molecules instead of atoms. “Molecules have a bit more complexity, which means they offer new ways to both encode quantum information, and also new ways in which they can interact,” says Lawrence Cheuk.


DevOps and Automation

Continuous Integration (CI) and Continuous Deployment (CD) are critical components of DevOps software development. CI is the practice of automating the integration of code changes from multiple contributors into a single software project. It is typically implemented in such a way that it triggers an automated build with testing, with the goals of quickly detecting and fixing bugs, improving software quality, and reducing release time. After the build stage, CD extends CI by automatically deploying all code changes to a testing and/or production environment. This means that, in addition to automated testing, the release process is also automated, allowing for a more efficient and streamlined path to delivering new features and updates to users. Docker and Kubernetes are frequently used to improve efficiency and consistency in CI/CD workflows. The code is first built into a Docker container, which is then pushed to a registry in the CI stage. During the CD stage, Kubernetes retrieves the Docker container from the registry and deploys it to the appropriate environment, whether testing, staging, or production. 
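
As a rough sketch of that flow in code, the example below uses the docker and kubernetes Python clients. The registry, deployment, and namespace names are invented, and real pipelines typically express these steps in the CI system's own configuration rather than a script:

```python
# Minimal sketch of the CI/CD flow described above, using the docker and
# kubernetes Python clients. Registry, namespace, and deployment names are
# illustrative; tests and error handling are omitted for brevity.
import docker
from kubernetes import client, config

REPO, TAG = "registry.example.com/myapp", "1.2.3"

# CI stage: build the image and push it to the registry
docker_client = docker.from_env()
image, _logs = docker_client.images.build(path=".", tag=f"{REPO}:{TAG}")
docker_client.images.push(REPO, tag=TAG)

# CD stage: point the staging Deployment at the freshly pushed image
config.load_kube_config()
apps = client.AppsV1Api()
patch = {"spec": {"template": {"spec": {"containers": [
    {"name": "myapp", "image": f"{REPO}:{TAG}"}]}}}}
apps.patch_namespaced_deployment(name="myapp", namespace="staging", body=patch)
```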


Unveiling the 'Willingness Pyramid' across generations and cities

The 'Willingness Pyramid' stands as a compelling representation of the shifting attitudes towards work models in bustling metropolises. At its core, it underscores a marked inclination towards the "only office" work model, with a notable emphasis on the younger, tech-savvy workforce in these urban hubs. For the emerging generations of Late millennials and Gen Z, the physical office environment is not merely a place of work but a dynamic space that fosters productivity, innovation, and collaboration. The younger workforce's enthusiastic embrace of the "only office" model is rooted in a confluence of factors. Raised in the digital age, these individuals have grown up with technology seamlessly integrated into their lives. As a result, they perceive the physical office as a hub for face-to-face interactions, spontaneous idea exchanges, and a breeding ground for innovation that cannot be fully replicated in remote settings. The office, for them, is not just a workplace but a social and creative nexus. This trend, however, exhibits a nuanced dynamic as we traverse through different age groups within the same urban landscapes.


Why FinOps Must Focus on Value, Not Just Cost

It’s not necessarily the fault of those teams. Rather, FinOps in its earliest iterations has suffered from some of the same problems that plagued its predecessors — namely, an approach to cost management that focuses on the “what” — how much the bill says you spent, and only after it arrives — versus the “why,” which should accurately reflect the business reasons for consuming cloud resources in the first place, as well as the results those choices produced. Moreover, FinOps, even while the name suggests tight collaboration, often still relies on fragmented and retroactive processes and information, according to Williams. ... Moving to a value-focused approach to FinOps is akin to the “shift left” mindset that is increasingly popular in security and other IT domains that have historically been dealt with via lagging indicators. Some organizations try mandating discipline with regard to cloud usage — the “do this or else” approach. “That never works,” Williams said. Instead, consumption policies and technical guardrails need to be implemented before resources are ever provisioned and deployed.
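
As a minimal illustration of such a pre-provisioning guardrail, the sketch below validates a resource request against policy before anything is deployed. The allowed instance types, budget figure, and request shape are all invented for the example:

```python
# Minimal sketch of a pre-provisioning guardrail: validate a resource request
# against policy before anything is deployed, rather than auditing the bill
# after it arrives. The policy values and request shape are illustrative.
ALLOWED_INSTANCE_TYPES = {"t3.medium", "t3.large", "m5.large"}
MAX_MONTHLY_BUDGET_USD = 2000

def approve_request(instance_type: str, est_monthly_cost: float, cost_center: str) -> bool:
    if instance_type not in ALLOWED_INSTANCE_TYPES:
        print(f"denied: {instance_type} is not on the approved list")
        return False
    if est_monthly_cost > MAX_MONTHLY_BUDGET_USD:
        print(f"denied: ${est_monthly_cost:.0f}/mo exceeds the {cost_center} budget")
        return False
    return True

approve_request("p4d.24xlarge", 24000.0, "data-science")  # blocked up front
```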


Researchers Unmask Sandman APT's Hidden Link to China-Based KEYPLUG Backdoor

"We did not observe concrete technical indicators confirming the involvement of a shared vendor or digital quartermaster in the case of LuaDream and KEYPLUG," Aleksandar Milenkoski, senior threat researcher at SentinelLabs, told The Hacker News. "However, given the observed indicators of shared development practices, and overlaps in functionalities and design, we do not exclude that possibility. Noteworthy is the prevalence of similar cases within the Chinese threat landscape, indicating there could be established internal and/or external channels for supplying malware to operational teams." "The order in which LuaDream and KEYPLUG evaluate the configured protocol among HTTP, TCP, WebSocket, and QUIC is the same: HTTP, TCP, WebSocket, and QUIC in that order," the researchers said. "The high-level execution flows of LuaDream and KEYPLUG are very similar." The adoption of Lua is another sign that threat actors, both nation-state aligned and cybercrime-focused, are increasingly setting their sights on uncommon programming languages like DLang and Nim to evade detection and persist in victim environments for extended periods of time.


The skills and traits of elite CTOs

Strategic thinking is essential for a CTO to align technology initiatives with the overall business strategy, Kowsari says. “A successful CTO should be able to set a clear technology vision, identify opportunities for innovation, and make informed decisions that drive the company’s growth,” he says. Without a strategic mindset, technology investments and initiatives might lack direction and fail to contribute to the organization’s success, Kowsari says. The ability to set and execute a vision is indeed a fundamental aspect of the CTO’s role, Clemenson says. “This encompasses aspects such as design, funding, efficient resource allocation, buy vs. build strategy, while simultaneously emphasizing short- and long-term considerations,” he says. At the same time, CTOs must be strong leaders. “CTOs are responsible for leading and managing technology teams,” Kowsari says. “Strong leadership and team-building skills are vital. Effective team management can lead to increased productivity, innovative solutions, and the successful execution of technology projects. It also helps retain top technical talent, which is essential in a competitive job market.”



Quote for the day:

"Don't be buffaloed by experts and elites. Experts often possess more data than judgement." -- Colin Powell

Daily Tech Digest - December 11, 2023

Enterprise Architecture – Supporting Resources on Demand

As the subscription economy grows, the market could become saturated with providers offering varying levels of service quality. Businesses should carefully evaluate their options, considering factors such as customer support, scalability, and the sophistication of available resources. The positive impact of selling EA as a subscription service, however, is clear. With more service providers offering cloud solutions, there is more competition for your business. You, as the business customer, have more options, which can lead to better services and pricing. Business customers of all sizes can get access to advanced technology and data storage capabilities through a subscription. This can open economic doors for developing nations, extending growth opportunities to players who would otherwise be unable to participate in a digital transformation journey. This fosters a more inclusive and diverse tech landscape, where breakthroughs can emerge from unexpected corners of the business world. You can focus on growing your core business without the traditional burdens of upfront investment and the complexity of building and managing infrastructure from scratch.


Trends in Data Governance and Security: What to Prepare for in 2024

In 2023, many companies turned to do-it-yourself (DIY) data governance to manage their data. Yet, without the help of data governance experts or professionals, this proved insufficient, leaving compliance gaps and data security errors in its wake. While DIY data governance seemed like a cost-effective solution, it often lacks the comprehensive security protocols and expertise that professional data governance provides, leaving companies exposed to data breaches and other security threats. Worse, the approach often involves piecemeal solutions that do not integrate well with each other, creating security gaps and leaving data vulnerable to attack. As a result, DIY data governance may not be able to keep up with the constantly evolving data privacy landscape, including new regulations and compliance requirements. Companies that rely on it are exposing themselves to significant risks and will see the repercussions in 2024.


Generative AI is off to a rough start

One big problem, among several others that Duckbill Chief Economist Corey Quinn highlights, is that although AWS felt compelled to position Q as significantly more secure than competitors like ChatGPT, it’s not. I don’t know that it’s worse, but it doesn’t help AWS’ cause to position itself as better and then not actually be better. Quinn argues this comes from AWS going after the application space, an area in which it hasn’t traditionally demonstrated strength: “As soon as AWS attempts to move up the stack into the application space, the wheels fall off in major ways. It requires a competency that AWS does not have and has not built up since its inception.” Perhaps. But even if we accept that as true, the larger issue is that there’s so much pressure to deliver on the hype of AI that great companies like AWS may feel compelled to take shortcuts to get there (or to appear to get there). The same seems to be true of Google. The company has spent years doing impressive work with AI yet still felt compelled to take shortcuts with a demo. As Parmy Olson captures, “Google’s video made it look like you could show different things to Gemini Ultra in real time and talk to it. You can’t.”


CIOs grapple with the ethics of implementing AI

Even with a team focused on AI, identifying risks and understanding how the organization intends to use AI both internally and publicly is challenging, McIntosh says. Team members must also understand and address the inherent possibility of AI bias, erroneous claims, and incorrect results, he says. “Depending on the use cases, the reputation of your company and brand may be at stake, so it’s imperative that you plan for effective governance.” With that in mind, McIntosh says it’s critical that CIOs “don’t rush to the finish line.” Organizations must create a thorough plan and focus on developing a governance framework and AI policy before implementing and exposing the technology. Identifying appropriate stakeholders, such as legal, HR, compliance and privacy, and IT, is where Plexus started its ethical AI process, McIntosh says. “We then created a draft policy to outline the roles and responsibilities, scope, context, acceptable use guidelines, risk tolerance and management, and governance,” he says. “We continue to iterate and evolve our policy, but it is still in development. We intend to implement it in Q1 2024.”


Accenture takes an industrialized approach to safeguarding its cloud controls

Accenture developed a virtual cloud control factory to support five major, global cloud infrastructure providers and enable reliable inventory; consistent log and alert delivery to support security incident detection; and predictable, stable, and repeatable processes for certifying cloud services and releasing security controls. The factory features five virtual "departments". There's research and development, which performs service certification, control definition, selection, measurement, and continual re-evaluation; the production floor designs and builds controls; quality assurance tests the controls; shipping and receiving integrates controls with compliance reporting tools; and customer service provides support to users after a control goes live. "What we decided to do was centralize that cloud control development, get all the needs into one place, start organizing them in a way that we could run them through a factory and get them out there so people can use common controls, common architecture that had a chance of keeping up with [our engineers'] innovation sitting on top of the [major cloud platforms'] innovation," Burkhardt says.


Pressure on Marketers Will Drive Three Key Data Moves in 2024

Data clouds help achieve that goal. In both time and expense, organizations can no longer afford to jump between different systems to try to make sense of what a customer wants and formulate a real-time response in the moment of interaction. With a CDP sitting directly on top of a data cloud, it is easier and less expensive to build a unique customer profile and then activate that profile across multiple systems. Organizations recognize that first-party data is a valuable asset and is the foundation for delivering a personalized customer experience (CX), but for too long business users have been stymied by complex, unintegrated marketing stacks and time-consuming data transformations. That approach to making data actionable -- turning data into insight -- is no longer sustainable when customers expect real-time, personalized experiences that are consistent across channels. ... Moving to a data cloud and coupling it with a CDP’s automated data quality and identity resolution addresses these issues head-on, and that trend will continue -- particularly for customer-facing brands that see a data cloud with an enterprise-grade CDP as a relatively fast, inexpensive way to monetize their customer data.


Initial Agile Requirements and Architecture Modeling

Talk to most agilists, and particularly the purists, and they’ll claim that they don’t do any modeling up front. This of course is completely false, they just use different terminology such as “populate the backlog” rather than initial requirements modeling and “identify a runway” instead of initial architecture modeling. Sigh. Some of the more fervent agilists may even tell you about the evils of big modeling up front which is why they choose to eschew anything that smells like up-front thinking. ... The goal of initial architecture modeling on an agile team is to identify what the team believes to be a viable strategy for building the solution. Sufficiency is determined by your stakeholders – Can you exhibit an understanding of the existing environment, and the future direction of your organization, and show how your proposed strategy reflects that? Your initial architecture model should be JBGE (just barely good enough) in that it addresses, at a high level, the business and technical landscapes that your solution will operate within. This modeling effort is often led, not dictated, by the architecture owner on your team.


Why are IT professionals not automating?

25% of participants highlighted cost and resources as potential obstacles. They wonder whether they need to create a custom solution and, if so, whether it’s cost-effective or cheaper to continue with manual maintenance. They are also concerned about the resources required to maintain an automated solution. 20% admit that they and their teams lack the knowledge or expertise to choose an automated solution. They are not familiar with automation in general or with the specific requirements of automating their systems. The survey results clearly indicate that many IT professionals are not familiar with, or don’t see the value of, certificate automation. Or is it that they haven’t thought about it enough? After all, certificates have been part of our IT infrastructure for a very long time; they are not exciting, but they do work, so why fix something that is not broken? Unfortunately, when the 90-day Google edict eventually becomes reality, it will increase the need for renewal and replacement of SSL/TLS certificates to four times (4X) the current pace. IT professionals may be underestimating the burden that it will put on their teams.
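
As a small illustration of the work that certificate automation replaces, the sketch below checks hosts for certificates expiring within 30 days. It assumes the Python cryptography library, and the host list is illustrative:

```python
# Minimal sketch: find certificates expiring within 30 days so renewal can be
# automated rather than handled ad hoc. Assumes the cryptography library;
# the host list is illustrative.
import ssl
from datetime import datetime, timedelta, timezone
from cryptography import x509

HOSTS = ["example.com", "internal.example.org"]

for host in HOSTS:
    pem = ssl.get_server_certificate((host, 443))
    cert = x509.load_pem_x509_certificate(pem.encode())
    expires = cert.not_valid_after_utc  # cryptography >= 42; older: not_valid_after
    if expires - datetime.now(timezone.utc) < timedelta(days=30):
        print(f"RENEW SOON: {host} expires {expires:%Y-%m-%d}")
    else:
        print(f"ok: {host} valid until {expires:%Y-%m-%d}")
```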


How Could AI Be a Tool for Workers?

The benefits for companies designing and using AI systems are vast and readily apparent. Tools that can complete work in a fraction of the time at a fraction of the cost are a boon for the bottom line. “The main beneficiaries of the technology are global technology giants primarily based in the United States,” says Michael Allen, CTO of enterprise content management company Laserfiche. He points out that these companies have the resources to accrue the massive amounts of data required to train AI models. Companies that adopt these powerful AI models can leverage them to cut costs. Allen also expects many companies to use AI to shift away from outsourcing. “A lot of firms outsource mostly routine clerical work to places like India, and I believe that's going to be threatened or impacted significantly by AI that will be able to do that work faster and cheaper,” he says. AI's devaluing of entry-level work is already visible. Stephanie Bell is a senior research scientist at the nonprofit coalition Partnership on AI, which has created guidelines to ensure AI's economic benefits are shared. She offers examples from the digital freelance market.


Bryan Cantrill on AI Doomerism: Intelligence Is Not Enough

Cantrill had titled his talk “Intelligence is not enough: the humanity of engineering.” Here the audience realizes they’re listening to the proud CTO of a company that just shipped its own dramatically redesigned server racks. “I want to focus on what it takes to actually do engineering… I actually do have a bunch of recent experience building something really big and really hard as an act of collective engineering…” ... Importantly, the common thread for these bugs was “emergent” properties — things not actually designed into the parts, but emerging when they’re all combined together. “For every single one of those, there is no piece of documentation. In fact, for several of those, the documentation was actively incorrect. The documentation would mislead you ...” Cantrill put up a slide saying “Intelligence alone does not solve problems like this,” presenting his team at Oxide as possessed of something uniquely human. “Our ability to solve these problems had nothing to do with our collective intelligence as a team…” he tells his audience. “We had to summon the elements of our character. Not our intelligence — our resilience.”



Quote for the day:

“I'd rather be partly great than entirely useless.” -- Neal Shusterman

Daily Tech Digest - December 10, 2023

'Move Fast And Break Things' Doesn’t Apply To AI

Given the urgency around generative AI, those looking for a first-mover advantage or those fearing being left behind may be tempted to adopt the "move fast and break things" mantra. After all, it has long been a staple of Silicon Valley culture. But following it would be a mistake in this instance. ... Consider the analogy of building a house—you wouldn’t just start digging immediately. You need to lay the groundwork first: planning, consulting structural engineers, arranging site visits, commissioning architectural drawings and securing building control approval. It is all essential work that needs to be completed before a brick is laid. But once it has been, confidence in the build skyrockets because there is a clear procedure to follow. Going slower to go faster also applies to AI. Developing a strategy for AI requires deep expertise and a first-class analysis of organizational data. This involves getting a holistic view of the data within an organization, understanding which elements could carry inherent bias and lead to the wrong insights, and building a picture of the level of automation that could improve operational efficiency.
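That bias-audit step can be made tangible with a small sketch: before wiring organizational data into an AI system, compare outcome rates across groups and flag large gaps for human review. The DataFrame and column names below are fabricated for illustration.

```python
# Hypothetical sketch: surfacing potential bias in organizational data
# before it feeds an AI system. All data and column names are invented.
import pandas as pd

decisions = pd.DataFrame({
    "region":   ["north", "north", "south", "south", "south", "north"],
    "approved": [1, 1, 0, 0, 1, 1],
})

# A large gap in approval rates between groups is a prompt for human
# review, not proof of bias on its own.
rates = decisions.groupby("region")["approved"].mean()
print(rates)
print("largest gap between groups:", round(rates.max() - rates.min(), 2))
```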


Generative AI as a copilot for finance and other sectors

While advanced technologies such as quantum computing and blockchain have long been a part of Moody's Analytics' IT arsenal, generative AI has spawned many complex models. That can be challenging for a company with large data sets that is concerned about data privacy and security, said Caroline Casey, general manager for customer experience and innovation at the company, in an interview. Before releasing its Research Assistant product on Dec. 4, Moody's created an internal copilot product -- not to be confused with Microsoft Copilot. Research Assistant is a search and analytical tool built on Azure OpenAI and uses OpenAI's GPT-4. "We know that the purpose of this is not to replace a human," Casey said. "It's to take out the kind of mundane work -- the trying to find information, the retrieval, the searching -- and actually help them to focus on where they've got the best expertise." Moody's began its journey in the summer after the CEO encouraged all employees of Moody's Corp. to be innovators.
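The article says only that Research Assistant is built on Azure OpenAI with GPT-4, so the following is a guess at the general shape of such a tool rather than Moody's actual design: a retrieval step feeds research snippets to the model, which answers only from them. The endpoint variables, deployment name, and retrieve_documents helper are all hypothetical.

```python
# Minimal retrieval-plus-generation sketch on Azure OpenAI.
# Endpoint, key, deployment name, and retrieve_documents() are
# hypothetical placeholders, not Moody's implementation.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def retrieve_documents(query: str) -> list[str]:
    """Placeholder: a real system would query a search index here."""
    return ["<snippet 1 about the issuer>", "<snippet 2 about the sector>"]

def answer(query: str) -> str:
    context = "\n".join(retrieve_documents(query))
    response = client.chat.completions.create(
        model="gpt-4",  # the Azure deployment name, assumed here
        messages=[
            {"role": "system",
             "content": "Answer only from the provided research snippets."},
            {"role": "user",
             "content": f"Snippets:\n{context}\n\nQuestion: {query}"},
        ],
    )
    return response.choices[0].message.content

print(answer("Summarize the outlook described in the retrieved research."))
```

This mirrors Casey's framing: the model handles the finding and retrieving, while the analyst supplies the expertise.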


World’s First Cybersecurity & AI Guidelines: Experts Weigh in

Speaking to Techopedia, Nic Chavez, Field Chief Information Officer at DataStax, noted that one of the important takeaways is the cautious and collaborative approach employed by the UK to develop the guidelines. “I think it’s important to recognize the caution and collaboration with which NCSC approached this endeavor. By seeking feedback from the international community, including other NATO nations, NCSC was able to triangulate recommendations that were reasonable, swiftly actionable and strong.” In his reaction, Jeff Schwartzentruber, Senior Machine Learning Scientist at eSentire and Industry Research Fellow at Toronto Metropolitan University, told Techopedia that releasing these AI guidelines is a step in the right direction, as they will help to expand international cooperation and accelerate commitments on the regulation and appropriate use of AI technologies. “I see this as a positive step forward in terms of expanding the international cooperation and discourse on the regulation and appropriate use of AI technologies. ...”


How the blockchain industry can adopt cybersecurity

While the theoretical underpinnings of blockchain offer unparalleled security benefits, the practical implementation introduces potential vulnerabilities. One such vulnerability lies in the exchange of data between blocks, where cybercriminals can intercept and manipulate information. To fortify blockchain systems against such attacks, the adoption of advanced encryption measures becomes paramount. Just as Distributed Denial of Service (DDoS) attacks are thwarted in traditional systems, blockchain must implement robust encryption to safeguard data exchange between blocks. Another challenge in blockchain security arises from censorship attacks, where malicious validators intentionally disrupt or halt the blockchain protocol. Additionally, attackers may masquerade as validators, gaining trust within the system and executing Trojan attacks. To address these threats, it is essential to employ traditional cybersecurity strategies, including encryption, key management, and DNS hygiene. By integrating artificial intelligence (AI) into the system, organizations can enhance their ability to detect consensus attacks, particularly in Proof of Stake (PoS) validation methods.
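The tamper-evidence that blockchain's security model rests on comes from each block committing to a hash of its predecessor, so altering any block breaks verification everywhere downstream. A minimal sketch with an invented block layout:

```python
# Minimal sketch of hash-chained blocks and integrity checking.
# The block structure is illustrative, not any specific chain's format.
import hashlib
import json

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    return {"data": data, "prev_hash": prev_hash}

def verify_chain(chain: list[dict]) -> bool:
    # Each block must reference the hash of the block before it.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

genesis = make_block("genesis", "0" * 64)
b1 = make_block("tx: alice->bob 5", block_hash(genesis))
b2 = make_block("tx: bob->carol 2", block_hash(b1))
chain = [genesis, b1, b2]

print(verify_chain(chain))          # True
b1["data"] = "tx: alice->bob 500"   # tampering...
print(verify_chain(chain))          # ...is detected: False
```

Note the division of labor the article implies: the hash chain makes tampering detectable, while encryption and key management protect the data itself in transit and at rest.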


SLAM Attack: New Spectre-based Vulnerability Impacts Intel, AMD, and Arm CPUs

The attack is an end-to-end exploit for Spectre based on a new feature in Intel CPUs called Linear Address Masking (LAM), as well as analogous features from AMD (called Upper Address Ignore or UAI) and Arm (called Top Byte Ignore or TBI). "SLAM exploits unmasked gadgets to let a userland process leak arbitrary ASCII kernel data," VUSec researchers said, adding it could be leveraged to leak the root password hash within minutes from kernel memory. While LAM is presented as a security feature, the study found that it ironically degrades security and "dramatically" increases the Spectre attack surface, resulting in a transient execution attack, which exploits speculative execution to extract sensitive data via a cache covert channel. ... AMD has also pointed to current Spectre v2 mitigations to address the SLAM exploit. Intel, on the other hand, intends to provide software guidance prior to the future release of Intel processors that support LAM. In the interim, Linux maintainers have developed patches to disable LAM by default.


Taking a strategic view of telecom networks in Indo-Pacific

The Indo-Pacific is home to some of the world’s fastest-growing digital economies harnessing technology for national governance and economic development. Telecommunications connectivity – the internet and mobile penetration base – forms the backbone for these economies. Unsurprisingly, the telecom market in the region is witnessing an upgrade. By 2030, telecom companies are expected to invest US$259 billion in the development of networks in the region. These investments will foster the expansion of the digital economy and act as catalysts for innovation, growth and prosperity, with 5G playing an indispensable role in this. 5G represents a generational shift in wireless telecommunications – anchored on higher data transfer speed and ultra-low latency. It holds the promise of revolutionising how people communicate and consume content on the internet and transforming edtech, telemedicine, precision agriculture, and the Internet of Things. However, 5G technology is not cheap, and developing economies have faced budgetary constraints in deploying it.


How to stop digital twins from being used against you

Beyond device optimization and prolonged lifecycles, however, there’s a dark side of digital twins that warrants careful consideration and mitigation strategies. First and foremost, digital twins offer hackers another chance at sensitive company information, particularly when the device data is stored in plain text in the cloud. Providing these models with up-to-date data means providing sensitive information. This goes beyond mere device information; it can sometimes include the personally identifiable data of employees and customers. Meanwhile, the use of international servers to run digital twin operations further complicates things. Different jurisdictions impose different privacy requirements, meaning that cross-border data exchanges to run these simulations can bring regulatory and compliance headaches. Additionally, the connected devices themselves can cause security issues. For example, IoT sensors sometimes operate on outdated and vulnerable operating systems. Moreover, cheap devices are well known for default credentials and unencrypted communications, an important concern as more than two billion devices come online next year.
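The plain-text-in-the-cloud risk has a straightforward mitigation: encrypt telemetry on the device before upload, so the twin's data store only ever holds ciphertext. A minimal sketch using the widely available cryptography package; the telemetry fields and the upload step are hypothetical.

```python
# Sketch: encrypting digital-twin telemetry before it leaves the device,
# so the cloud store never holds plain text. The upload call is a stub.
import json
from cryptography.fernet import Fernet

# In practice the key lives in a KMS or secure element, not in code.
key = Fernet.generate_key()
cipher = Fernet(key)

telemetry = {"device_id": "pump-17", "temp_c": 71.3, "operator": "j.doe"}

token = cipher.encrypt(json.dumps(telemetry).encode())
# upload_to_twin_store(token)  # hypothetical call; stores ciphertext only

# Only holders of the key can recover the reading.
restored = json.loads(cipher.decrypt(token))
print(restored["temp_c"])
```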


The Role of Non-Executive Directors in Driving Innovation

The agile nature of startups grants them an advantage in driving disruptive innovation. They have a greater appetite for risk and tend to be nimbler than their established counterparts. Free from confining layers of middle management and quarterly reporting pressures, these small entities are often seen as the leaders of innovation. At the other end of the spectrum, large companies, despite having the funds to finance innovation, tend to exhibit risk avoidance to protect individual reputations and the status quo. But the acquisition of innovative companies can be a strategic move for larger corporations, provided the innovative culture of the smaller entity is preserved in the process. ... NEDs play an important role in balancing the need for funding innovation against potential impacts on existing business practices. Conflict often emerges between securing immediate profits for shareholders and investing in long-term growth fueled by innovation. However, there is evidence that companies built for the future—those that prioritize innovation—can generate shareholder returns almost three times greater than those of the broader market, as reflected in the S&P 1200.


Surviving The Polycrisis Of Technological Singularity

First, let us agree on what a technological singularity would look like. It is an idea that places us in an era where predictability ceases to exist and the conventional understanding of technological evolution is of little use. Historically, we as a society have failed to predict the effects of technological evolutions, usually underestimating the long-term effects of technological disruptions. The fusion of various revolutionary technologies in this era, such as quantum computing, nanotechnology, superconductivity and AI, will surely propel us into a zone of immense possibilities and daunting uncertainties that are hard to grasp, let alone predict. One could argue that, if we have reached this stage today, then we are in the nascent stages of the technological singularity. The open letter written by leaders from various areas of society calling for a six-month pause in AI development is, to me, one clear signal of the beginning of technological singularity. We can slow its pace, but we may not be able to stop it. At the core of this discussion lies a profound question: Can humanity harness the potential of these technologies and mitigate the corresponding risks simultaneously?


DevOps Strategies for Connected Car Development

The connected car is a complex ecosystem of software systems. These vehicles have numerous systems that communicate with each other, the driver and the outside world. Managing the development of these systems can be a daunting task, and this is where DevOps strategies come in. DevOps aims to shorten the system development life cycle and provide continuous delivery with high software quality. This methodology is particularly suited to the complex software systems of connected cars, as it encourages a holistic view of the development process, ensuring that all components work together seamlessly. Moreover, DevOps helps to manage the complexity of car software systems by automating tasks, reducing errors and improving efficiency. The use of automated tools for configuration management, deployment and monitoring means less manual work, fewer mistakes, and quicker problem resolution. One of the greatest challenges in connected car development is the need for speed. In this fast-paced industry, companies are under pressure to develop and deploy new features quickly to stay competitive. 
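The article stays at the level of principle, so as one concrete (and entirely hypothetical) flavor of automating deployment and monitoring, the sketch below deploys a build, polls a health endpoint, and rolls back automatically on failure; the URL and deploy script are placeholders.

```python
# Sketch of an automated deploy-verify-rollback loop of the kind a
# connected-car DevOps pipeline might run for a backend service.
# The URL and shell commands are hypothetical placeholders.
import subprocess
import time
import urllib.request

HEALTH_URL = "https://telematics.example.com/health"  # hypothetical endpoint

def healthy(retries: int = 5, delay_s: float = 3.0) -> bool:
    for _ in range(retries):
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except OSError:
            pass  # service not up yet; retry after a pause
        time.sleep(delay_s)
    return False

subprocess.run(["./deploy.sh", "v2.4.1"], check=True)  # hypothetical script
if not healthy():
    # Automated rollback: no human in the loop for the common failure case.
    subprocess.run(["./deploy.sh", "v2.4.0"], check=True)
    raise SystemExit("deploy failed health check; rolled back")
print("deploy verified")
```

Automating this single check illustrates the article's claim in miniature: less manual work, fewer mistakes, and quicker problem resolution when a release goes wrong.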



Quote for the day:

"If you genuinely want something, don't wait for it--teach yourself to be impatient." -- Gurbaksh Chahal