Daily Tech Digest - January 05, 2024

The dark side of AI: Scientists say there’s a 5% chance of AI causing humans to go extinct

Despite concerns about AI behaving in ways misaligned with human values, some argue that current technology cannot cause the catastrophic consequences its critics predict. Nir Eisikovits, a philosophy professor, contends that AI systems cannot make complex decisions and do not have autonomous access to critical infrastructure. While the fear of AI wiping out humanity grabs attention, an editorial in Nature argues that the more immediate societal concerns lie in biased decision-making, job displacement, and the misuse of facial recognition technology by authoritarian regimes. The editorial calls for a focus on actual risks, and on actions to address them, rather than on fearmongering narratives. The prospect of AI with human-level intelligence raises the theoretical possibility of AI systems creating other AI, leading to uncontrollable "superintelligence." Authors Otto Barten and Joep Meindertsma argue that the competitive nature of AI labs incentivizes tech companies to release products rapidly, possibly neglecting ethical considerations and taking risks.


10 Skills Enterprise Architects Need In 2024

While an abundance of legacy technology is a cause for concern, each application needs to be appraised on a case-by-case basis. An older application could actually be a better functional fit for your organization; more likely, though, once you have clarity on how each application fits into your IT landscape, it will become apparent that removing it would cause more problems than it would solve. Just as enterprise architects need to become experts at surgically removing outdated applications, they also need to know when the time is right to remove an application and how to manage legacy technology until that point. That's the true value of enterprise architecture. ... As generative artificial intelligence (AI) and other new technologies take over more of the routine weight of daily tasks, the value a human can add lies increasingly in communication, negotiation, and diplomacy. Getting stakeholders on board with enterprise architecture takes charm and understanding.


The European Data Act: New Rules for a New Age

A key element of the EU's data strategy, the Data Act is intended to foster new, innovative services and more competitive prices for aftermarket services. According to the European Commission, the Data Act will make more data available for reuse and is expected to create 270 billion euros of additional gross domestic product by 2028. Complementing the Data Governance Act, which sets out the processes and structures to facilitate data sharing by companies across the EU and between sectors, the Data Act clarifies who can create value from industrial data and under which conditions. The Data Act also aims to put users and providers of data processing services on a more equal footing in terms of access to data. ... The Data Act includes specific measures to allow users to access the data their connected products generate (including the metadata necessary to interpret that data) and to share it with third parties to provide aftermarket or other data-driven innovative services. The Data Act further sets out that such data should be accessible in an easy, secure, comprehensive and structured manner, free of charge, and in a commonly used machine-readable format.


Unlocking the Potential of Gen AI in Cyber Risk Management

Security automation powered by AI plays a pivotal role in streamlining various security functions, alleviating the workload for CSOs and CIOs and facilitating regulatory compliance. It simplifies routine security tasks, freeing human analysts to focus on more intricate risk analysis and strategic decision-making. One notable contribution of AI lies in code inspection and vulnerability assessment. For instance, tools such as Amazon Inspector for Lambda code and Amazon Detective provide indispensable support. Amazon Inspector aids in the comprehensive examination of code, identifying potential vulnerabilities or security loopholes within Lambda functions, which are integral parts of many cloud applications; this early identification allows those vulnerabilities to be remediated before deployment. Additionally, Amazon Detective helps security analysts by correlating and organizing vast amounts of data to identify patterns or anomalies that might signify a security issue. By leveraging machine learning and AI-driven insights, it streamlines the process of identifying and addressing such issues effectively.
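
As a rough illustration of this kind of automation, the hedged Python sketch below queries Amazon Inspector for high-severity findings on Lambda functions so they can be triaged before deployment. The filter keys and enum values are assumptions based on the boto3 inspector2 API and should be verified against the current SDK reference.

```python
import boto3

# Sketch: list CRITICAL/HIGH Amazon Inspector findings scoped to Lambda
# functions so they can be remediated before deployment. Filter key names
# and enum values are assumptions based on the boto3 inspector2 API.
inspector = boto3.client("inspector2")

paginator = inspector.get_paginator("list_findings")
pages = paginator.paginate(
    filterCriteria={
        "resourceType": [
            {"comparison": "EQUALS", "value": "AWS_LAMBDA_FUNCTION"},
        ],
        "severity": [
            {"comparison": "EQUALS", "value": "CRITICAL"},
            {"comparison": "EQUALS", "value": "HIGH"},
        ],
    }
)

for page in pages:
    for finding in page["findings"]:
        # Each finding carries the affected resource and remediation guidance.
        print(finding["severity"], "-", finding["title"])
```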


Honeywell’s Journey to Autonomous Operations

We've integrated AI into our technical-support operations, enabling customers to receive answers to their technical questions within minutes or seconds, as opposed to the day or two it previously took. Today, the addition of generative AI has amplified the capabilities of industrial AI, making it more powerful than ever. For example, we're currently examining millions of instances of alarms triggered in the plants of our industrial customers, to evaluate whether such historical datasets could train a robust language model that would assist plant operators in identifying and addressing alarm issues promptly and provide guidance on necessary actions. ... With the convergence of IoT and AI software, the journey to autonomous operations is accelerating rapidly in the industrial world. However, automated decision-making requires both domain knowledge and the technical capabilities to build such a system. In vetting potential partners, look for one with the experience, data, and domain expertise to help you make the transition at scale.
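
As a purely hypothetical sketch of how historical alarm data might be turned into training examples for such a model, the Python below serializes alarm records into instruction-style prompt/completion pairs; every field name and record is illustrative, not anything Honeywell has described.

```python
import json

# Hypothetical sketch: serialize historical alarm records into
# instruction-style training examples for a plant-operator assistant.
# Field names (tag, priority, cause, action) are invented for illustration.
alarms = [
    {
        "tag": "FIC-101",
        "message": "High-high level in feed drum",
        "priority": "critical",
        "cause": "Inlet valve stuck open during startup",
        "action": "Close manual bypass, verify level transmitter, notify shift lead",
    },
]

with open("alarm_training_data.jsonl", "w") as f:
    for a in alarms:
        example = {
            "prompt": f"Alarm {a['tag']} ({a['priority']}): {a['message']}\n"
                      "What is the likely cause and recommended action?",
            "completion": f"Likely cause: {a['cause']}. "
                          f"Recommended action: {a['action']}.",
        }
        f.write(json.dumps(example) + "\n")
```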


Data and AI Predictions for 2024: Shifting Sands in Enterprise Data and AI Technologies

As organizations continue their shift to cloud-based data and analytics infrastructure, a more prudent fiscal outlook will be the theme for 2024. The cloud migration megatrend will not reverse, but organizations will scrutinize their cloud spend more than ever due to the challenging macroeconomic environment. In the cloud analytics arena, Databricks and Snowflake will continue their dominance with their well-established platforms. In particular, Databricks’ first-mover advantage for facilitating a lakehouse architecture will allow it to capture more market share. This paradigm combines the flexibility of data lakes with the management features of data warehouses, offering the best of both worlds to enterprises. On the other hand, Google BigQuery is expected to retain its stronghold within Google Cloud Platform (GCP) deployments, bolstered by deep integration with other GCP services and a strong existing customer base. However, the economic headwinds will compel enterprises to consider the total cost of ownership more closely. As a result, the traditional data warehouse architecture will see a decline in favor of the more cost-effective lakehouse design pattern. 
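
For readers unfamiliar with the pattern, the sketch below shows the lakehouse idea with Delta Lake on PySpark: open files on cheap storage, plus warehouse-style management such as ACID writes and time travel. It assumes the delta-spark package is installed; paths and data are illustrative.

```python
from pyspark.sql import SparkSession

# Minimal lakehouse sketch with Delta Lake: data stays in an open file
# format on lake storage, but gets warehouse-style management (ACID
# transactions, schema enforcement, time travel).
spark = (
    SparkSession.builder.appName("lakehouse-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config(
        "spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog",
    )
    .getOrCreate()
)

events = spark.createDataFrame(
    [("2024-01-05", "login"), ("2024-01-05", "purchase")],
    ["event_date", "event_type"],
)

# ACID, schema-enforced write to an open format on lake storage.
events.write.format("delta").mode("append").save("/tmp/lakehouse/events")

# Warehouse-style time travel: read the table as of its first version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/tmp/lakehouse/events")
v0.show()
```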


“You can’t prevent the unpreventable” - Rubrik CEO

A significant hurdle in the fight against cyber threats as a whole lies in legislation and prosecution. The most capable cybercriminal enterprises are often state-sponsored groups harbored within nations that share their sympathies. While it is possible to seize their cyber assets and disrupt their operations, it is nearly impossible to prosecute a criminal working on behalf of a hostile government. Sinha states that not enough is being done, at both the business and governmental levels, to create frameworks for information sharing. Under such frameworks, when one business suffers a successful attack, it can be studied to understand the methods of intrusion, how the data was encrypted or extracted, and what could have been done at each stage of the attack to minimize the damage. This not only allows businesses to improve their data security and recovery strategies, but also provides attack playbooks that can be used to identify the groups responsible and their cyber infrastructure. However, there is an air of hesitation: many businesses would prefer to pay a ransom rather than reveal that they were successfully breached, fearing reputational and economic losses.
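
Information-sharing frameworks of this kind typically rely on structured threat-intelligence formats. As a hedged sketch, the Python below uses the stix2 library to encode one finding from a studied breach as a shareable STIX 2.1 bundle; the hash and names are invented for illustration.

```python
from stix2 import Bundle, Indicator, Malware, Relationship

# Sketch: package one lesson from an analyzed breach as shareable threat
# intelligence. The file hash and malware name are made up for illustration.
indicator = Indicator(
    name="Ransomware dropper observed in analyzed breach",
    description="SHA-256 of the initial-access payload.",
    pattern="[file:hashes.'SHA-256' = "
            "'aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f']",
    pattern_type="stix",
)

malware = Malware(name="ExampleLocker", is_family=False)

bundle = Bundle(
    indicator,
    malware,
    Relationship(indicator, "indicates", malware),
)

# The serialized bundle could then be published via a TAXII server.
print(bundle.serialize(pretty=True))
```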


Gen AI: A Shield for Improved Cyber Resilience

Before implementing GenAI as a proper defense tool, teams and leaders need to understand its strengths and weaknesses. Proper research and education will ensure that security procedures match the right tool to the corresponding task. An easy way to understand the benefits of a given AI tool is to review its AI model card (sometimes known as a "system card"), which documents the model's capabilities, what it has and has not been tested for, and its known flaws and vulnerabilities. Vetting AI models is vital, and establishing model provenance should be the first step of any defense strategy. President Biden's recent executive order on AI reinforces the importance of vetting AI models, requiring that models be red-teamed to suss out potential weaknesses. Model provenance documents a model's history: its origin, its architecture and parameters, its dependencies, the data used to train it, and other relevant details.
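
As a simple illustration, a provenance record of the kind described could be captured as a structured object a security team signs off on before approving a model. The sketch below uses a plain Python dataclass; the field names are chosen for illustration rather than taken from any standard model-card schema.

```python
from dataclasses import dataclass, field

# Illustrative provenance record covering the fields described above.
@dataclass
class ModelProvenance:
    name: str
    origin: str                  # who built it, where it was obtained
    architecture: str            # e.g. "decoder-only transformer"
    parameter_count: int
    dependencies: list = field(default_factory=list)
    training_data_sources: list = field(default_factory=list)
    red_team_findings: list = field(default_factory=list)
    tested_for: list = field(default_factory=list)
    not_tested_for: list = field(default_factory=list)

record = ModelProvenance(
    name="example-llm-7b",
    origin="Downloaded from a public model hub; publisher signature verified",
    architecture="decoder-only transformer",
    parameter_count=7_000_000_000,
    dependencies=["pytorch", "tokenizers"],
    training_data_sources=["filtered web crawl", "licensed corpora"],
    not_tested_for=["unreviewed generation of security-critical code"],
)
print(record)
```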


Apache ERP Zero-Day Underscores Dangers of Incomplete Patches

The incident highlights attackers' strategy of scrutinizing any patches released for high-value vulnerabilities — efforts which often result in finding ways around software fixes, says Douglas McKee, executive director of threat research at SonicWall. "Once someone's done the hard work of saying, 'Oh, a vulnerability exists here,' now a whole bunch of researchers or threat actors can look at that one narrow spot, and you've kind of opened yourself up to a lot more scrutiny," he says. "You've drawn attention to that area of code, and if your patch isn't rock solid or something was missed, it's more likely to be found because you've extra eyes on it." ... The reasons that companies fail to fully patch an issue are numerous, from not understanding the root cause of the problem to dealing with huge backlogs of software vulnerabilities to prioritizing an immediate patch over a comprehensive fix, says Jared Semrau, a senior manager with Google Mandiant's vulnerability and exploitation group. "There is no simple, single answer as to why this happens," he says. 
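
A generic, hypothetical example (deliberately unrelated to the affected Apache code) shows how a patch that merely blocks an observed payload differs from one that fixes the root cause:

```python
import subprocess

def run_lookup_patched_badly(hostname: str) -> str:
    # Incomplete fix: rejects only the payload seen in the wild.
    if ";" in hostname:
        raise ValueError("invalid hostname")
    # Root cause untouched: still exploitable via $(...), &&, |, newlines...
    return subprocess.run(
        "nslookup " + hostname, shell=True, capture_output=True, text=True
    ).stdout

def run_lookup_fixed(hostname: str) -> str:
    # Root-cause fix: never hand user input to a shell at all.
    return subprocess.run(
        ["nslookup", hostname], capture_output=True, text=True
    ).stdout
```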


Unlocking the Secrets of Data Privacy: Navigating the World of Data Anonymization, Part 1

Implementing data anonymization techniques presents many technical challenges that demand meticulous deliberation and expertise. One paramount obstacle lies in determining the optimal level of anonymization. A profound comprehension of the data's structure and the potential for re-identification is imperative when employing techniques such as k-anonymity, l-diversity, or differential privacy. Furthermore, scalability poses another formidable hurdle: as data volumes continue to grow, applying anonymization techniques effectively without unduly compromising performance becomes increasingly difficult. Numerous difficulties also emerge during implementation because of the heterogeneous nature of data types, from structured data in databases to unstructured data in documents and images. Additionally, the challenge of keeping pace with ever-evolving data formats and sources necessitates constant updates and adaptations of anonymization strategies.
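
As a small, concrete example of the first challenge, the Python sketch below checks a table for k-anonymity violations with pandas: any combination of quasi-identifiers appearing fewer than k times marks rows that could be re-identified. Column names and the choice of k are illustrative.

```python
import pandas as pd

def k_anonymity_violations(df: pd.DataFrame, quasi_identifiers: list, k: int) -> pd.DataFrame:
    """Return the quasi-identifier groups whose size falls below k."""
    counts = df.groupby(quasi_identifiers).size().reset_index(name="count")
    return counts[counts["count"] < k]

records = pd.DataFrame(
    {
        "zip_code": ["02139", "02139", "02139", "02142"],
        "age_band": ["30-39", "30-39", "30-39", "40-49"],
        "diagnosis": ["flu", "cold", "flu", "asthma"],  # sensitive attribute
    }
)

# The lone 02142/40-49 row violates k=2 and would need generalization
# (coarser zip or age bands) or suppression before release.
print(k_anonymity_violations(records, ["zip_code", "age_band"], k=2))
```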



Quote for the day:

"You may be disappointed if you fail, but you are doomed if you don't try." -- Beverly Sills
