Daily Tech Digest - July 12, 2024

4 considerations to help organizations implement an AI code of conduct

Many organizations consider reinventing the wheel to accommodate AI tools, but this creates a significant amount of unnecessary work. Instead, they should subject any AI tool to the same rigorous procurement process that applies to any product that concerns data security. The procurement process must also take into consideration the organization’s privacy and ethical standards, to ensure these are never compromised in the name of new technology. ... It’s important to be conscious of the privacy policies of AI tools when using these in an enterprise environment — and be sure to only use these with a commercial license. To address this risk, an AI code of conduct should stipulate that free tools are categorically banned for use in any business context. Instead, employees should be required to use an approved, officially procured commercial license solution, with full privacy protections. ... Every organization needs to remain aware of how their technology vendors use AI in the products and services that they buy from them. To enable this, an AI code of conduct should also enforce policies to enable organizations to keep track of their vendor agreements.


From Microservices to Modular Monoliths

You know who really loves microservices? Cloud hosting companies like Microsoft, Amazon, and Google. They make a lot of money hosting microservices. They also make a lot of money selling you tools to manage your microservices. They make even more money when you have to scale up your microservices to handle the increased load on your system. ... So what do you do when you find yourself in microservice hell? How do you keep the gains you (hopefully) made in breaking up your legacy ball of mud, without having to constantly contend with a massively distributed system? It may be time to (re)consider the modular monolith. A modular monolith is a monolithic application that is broken up into modules. Each module is responsible for a specific part of the application. Modules can communicate with each other through well-defined interfaces. This allows you to keep the benefits of a monolithic architecture, while still being able to break up your application into smaller, more manageable pieces. Yes, you'll still need to deal with some complexity inherent to modularity, such as ensuring modules remain independent while still being able to communicate with one another efficiently. 
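The module boundaries described above can be sketched in code. The following is an illustrative Python sketch (module and method names are invented for the example): each module hides its internals behind a well-defined interface, so both ship in one deployable process yet can evolve, or later be extracted, independently.

```python
from abc import ABC, abstractmethod

# Hypothetical interface the "billing" module exposes to the rest of the app.
class BillingPort(ABC):
    @abstractmethod
    def charge(self, customer_id: str, amount_cents: int) -> bool: ...

# The module's internals stay private; other modules import only the port.
class BillingModule(BillingPort):
    def __init__(self):
        self._ledger: list[tuple[str, int]] = []

    def charge(self, customer_id: str, amount_cents: int) -> bool:
        self._ledger.append((customer_id, amount_cents))
        return True

# The "orders" module depends only on the interface, never on billing
# internals, which keeps the modules independent inside one monolith.
class OrderModule:
    def __init__(self, billing: BillingPort):
        self._billing = billing

    def place_order(self, customer_id: str, amount_cents: int) -> str:
        if self._billing.charge(customer_id, amount_cents):
            return "confirmed"
        return "payment-failed"
```

Swapping `BillingModule` for a remote service later would only require a new implementation of `BillingPort`, which is the escape hatch a modular monolith preserves.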


Deep Dive: Optimizing AI Data Storage Management

In an AI data pipeline, various stages align with specific storage needs to ensure efficient data processing and utilization. Here are the typical stages along with their associated storage requirements: Data collection and pre-processing: The storage where the raw and often unstructured data is gathered and centralized (increasingly into Data Lakes) and then cleaned and transformed into curated data sets ready for training processes. Model training and processing: The storage that feeds the curated data set into GPUs for processing. This stage of the pipeline also needs to store training artifacts such as the hyperparameters, run metrics, validation data, model parameters and the final production inferencing model. Inferencing and model deployment: The mission-critical storage where the training model is hosted for making predictions or decisions based on new data. The outputs of inferencing are utilized by applications to deliver the results, often embedded into information and automation processes. Storage for archiving: Once the training stage is complete, various artifacts such as different sets of training data and different versions of the model need to be stored alongside the raw data.


RAG (Retrieval Augmented Generation) Architecture for Data Quality Assessment

RAG is designed to leverage LLMs on your own content or data. It involves retrieving relevant content to augment the context or insights as part of the generation process. However, RAG is an evolving technology with both strengths and limitations. RAG integrates information retrieval from a dedicated, custom, and accurate knowledge base, reducing the risk of LLMs offering general or non-relevant responses. For example, when the knowledge base is tailored to a specific domain (e.g., legal documents for a law firm), RAG equips the LLM with relevant information and terminology, improving the context and accuracy of its responses. At the same time, there are limitations associated with RAG. RAG heavily relies on the quality, accuracy, and comprehensiveness of the information stored within the knowledge base. Incomplete, inaccurate or missing information or data can lead to misleading or irrelevant retrieved data. Overall, the success of RAG hinges on quality data. So, how are RAG models implemented? RAG has two key components: a retriever model and a generator model.
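The retriever/generator split can be illustrated with a deliberately tiny sketch. Real systems use embedding search and an LLM; here a word-overlap scorer stands in for the retriever and a string formatter stands in for the generator, and the knowledge-base contents are invented for the example.

```python
# Minimal RAG sketch: a toy retriever scores documents by word overlap with
# the query, and a stand-in "generator" composes an answer from the top hits.

def retrieve(query: str, knowledge_base: list[str], k: int = 1) -> list[str]:
    q_words = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(query: str, context: list[str]) -> str:
    # A production system would prompt an LLM with the retrieved context here.
    return f"Q: {query} | Context: {' '.join(context)}"

kb = [
    "The firm's notice period is 30 days.",
    "Office hours are 9 to 5.",
]
query = "What is the notice period?"
answer = generate(query, retrieve(query, kb))
```

The point of the structure, not the toy scoring, is what carries over: retrieval narrows the model's context to domain-relevant text before generation, which is exactly where knowledge-base quality becomes the limiting factor.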
 

NoSQL Database Growth Has Slowed, but AI Is Driving Demand

As for MongoDB, it too is targeting generative AI use cases. In a recent post on The New Stack, developer relations team lead Rick Houlihan explicitly compared its solution to PostgreSQL, a popular open source relational database system. Houlihan contended that systems like PostgreSQL were not designed for the type of workloads demanded by AI: “Considering the well-known performance limitations of RDBMS when it comes to wide rows and large data attributes, it is no surprise that these tests indicate that a platform like PostgreSQL will struggle with the kind of rich, complex document data required by generative AI workloads.” Unsurprisingly, he concludes that using a document database (like MongoDB) “delivers better performance than using a tool that simply wasn’t designed for these workloads.” In defense of PostgreSQL, there is no shortage of managed service providers for Postgres that provide AI-focused functionality. Earlier this year I interviewed a “Postgres as a Platform” company called Tembo, which has seen a lot of demand for AI extensions. “Postgres has an extension called pgvector,” Tembo CTO Samay Sharma told me.
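For readers unfamiliar with pgvector: it ranks rows by vector distance to a query embedding. The pure-Python sketch below mimics that ranking with cosine distance; the embeddings are toy values, and the SQL in the comment shows the roughly equivalent pgvector query (`<=>` is its cosine-distance operator).

```python
import math

# pgvector-style nearest-neighbor ranking, done by hand with cosine distance.
def cosine_distance(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

rows = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.9, 0.2],
}
query = [1.0, 0.0, 0.0]

# Roughly equivalent in pgvector SQL:
#   SELECT id FROM docs ORDER BY embedding <=> '[1,0,0]' LIMIT 1;
nearest = min(rows, key=lambda k: cosine_distance(query, rows[k]))
```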


Let’s Finally Build Continuous Database Reliability! We Deserve It

While we worked hard to make sure our CI/CD pipelines are fast and learned how to deploy and test applications reliably, we haven’t advanced the database world in the same way. It’s time to get continuous reliability around databases as well. To do that, developers need to own their databases. Once developers take over the ownership, they will be ready to optimize the pipelines, thereby achieving continuous reliability for databases. This shift of ownership needs to be consciously driven by technical leaders. ... The primary advantage of implementing database guardrails and empowering developers to take ownership of their databases is scalability. This approach eliminates team constraints, unlocking their complete potential and enabling them to operate at their optimal speed. By removing the need to collaborate with other teams that lack comprehensive context, developers can work more swiftly, reducing communication overhead. Just as we recognized that streamlining communication between developers and system engineers was the initial step, leading to the evolution into DevOps engineers, the objective here is to eliminate dependence on other teams. 


Digital Transformation: Making Information Work for You

With information generated by digital transactions, the first goal is to ensure that the knowledge garnered does not get stuck between only those directly participating in the transaction. Lessons learned from the transaction should become part of the greater organizational memory. This does not mean that every single transaction needs to be reported to every person in the organization. It also doesn’t mean that the information needs to be elevated in the same form or at the same velocity to all recipients. Those participating in the transaction need an operational view of the transaction. This needs to happen in real time. The information is the enabler of the human-to-computer-to-human transaction and the speed of that information flow needs to be as quick as it was in the human-to-human transaction. Otherwise, it will be viewed as a roadblock instead of an enabler. As it escalates to the next level of management, the information needs to evolve to a managerial view. Managers are more interested in anomalies and outliers or data at a summary level. This level of information is no less impactful to the organizational memory but is associated with a different level of decision-making. 


Generative AI won’t fix cloud migration

The allure of generative AI lies in its promise of automation and efficiency. If cloud migration was a one-size-fits-all scenario, that would work. But each enterprise faces unique challenges based on its technological stack, business requirements, and regulatory environment. Expecting a generative AI model to handle all migration tasks seamlessly is unrealistic. ... Beyond the initial investment in AI tools, the hidden costs of generative AI for cloud migration add up quickly. For instance, running generative AI models often requires substantial computational resources, which can be expensive. Also, keeping generative AI models updated and secure demands robust API management and cybersecurity measures. Finally, AI models need continual refinement and retraining to stay relevant, incurring ongoing costs. ... Successful business strategy is about what works well and what needs to be improved. We all understand that AI is a powerful tool and has been for decades, but it needs to be considered carefully—once you’ve identified the specific problem you’re looking to solve. Cloud migration is a complex, multifaceted process that demands solutions tailored to unique enterprise needs. 


Navigating Regulatory and Technological Shifts in IIoT Security

Global regulations play a pivotal role in shaping the cybersecurity landscape for IIoT. The European Union’s Cyber Resilience Act (CRA) is a prime example, setting stringent requirements for manufacturers supplying products to Europe. By January 2027, companies must meet comprehensive standards addressing security features, vulnerability management, and supply chain security. ... The journey towards securing IIoT environments is multifaceted, requiring manufacturers to navigate regulatory requirements, technological advancements, and proactive risk management strategies. Global regulations like the EU’s Cyber Resilience Act set critical standards that drive industry-wide improvements. At the same time, technological solutions such as PKI and SBOMs play essential roles in maintaining the integrity and security of connected devices. By adopting a collaborative approach and leveraging robust security frameworks, manufacturers can create resilient IIoT ecosystems that withstand evolving cyber threats. The collective effort of all stakeholders is paramount to ensuring the secure and reliable operation of industrial environments in this new era of connectivity.


Green Software Foundation: On a mission to decarbonize software

One of the first orders of business in increasing awareness: getting developers and companies to understand what green software really is. Instead of reinventing the wheel, the foundation reviewed a course in the concepts of green software that Hussain had developed while at Microsoft. To provide an easy first step for organizations to take, the foundation borrowed from Hussain’s materials and created a new basic training course, “Principles of Green Software Engineering.” The training is only two or three hours long and level-sets students to the same playing field. ... When it comes to software development, computing inefficiencies (and carbon footprints) are more visible — bulky libraries for example — and engineers can improve it more easily. Everyday business operations, on the other hand, are a tad opaque but still contribute to the company’s overall sustainability score. Case in point: The carbon footprint of a Zoom call is harder to measure, Hussain points out. The foundation helped to define a Software Carbon Intensity (SCI) score, which applies to all business operations including software development and SaaS programs employees might use.
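The SCI score mentioned above has a published formula: the Green Software Foundation's specification defines it as SCI = ((E × I) + M) per R, where E is energy consumed, I is the grid's carbon intensity, M is embodied emissions, and R is a functional unit such as an API call. A minimal sketch, with illustrative numbers:

```python
# Software Carbon Intensity per the GSF spec: SCI = ((E * I) + M) per R.
# E: energy consumed (kWh), I: grid carbon intensity (gCO2e/kWh),
# M: embodied emissions (gCO2e), R: count of the chosen functional unit.
def sci(energy_kwh: float, intensity_g_per_kwh: float,
        embodied_g: float, functional_units: float) -> float:
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units

# Illustrative only: 2 kWh at 400 gCO2e/kWh plus 100 g embodied emissions,
# spread over 1,000 API calls.
score = sci(2.0, 400.0, 100.0, 1000.0)  # gCO2e per API call
```

The choice of R is what lets the score cover operations beyond software development: "per Zoom call" or "per employee-day" are equally valid functional units, even if the inputs are harder to measure.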



Quote for the day:

"Real leadership is being the person others will gladly and confidently follow." -- John C. Maxwell

Daily Tech Digest - July 11, 2024

Will AI Ever Pay Off? Those Footing the Bill Are Worrying Already

Though there is some nervousness around how long soaring demand can last, no one doubts the business models for those at the foundations of the AI stack. Companies need the chips and manufacturing they, and they alone, offer. Other winners are the cloud companies that provide data centers. But further up the ecosystem, the questions become more interesting. That’s where the likes of OpenAI, Anthropic and many other burgeoning AI startups are engaged in the much harder job of finding business or consumer uses for this new technology, which has gained a reputation for being unreliable and erratic. Even if these flaws can be ironed out (more on that in a moment), there is growing worry about a perennial mismatch between the cost of creating and running AI and what people are prepared to pay to use it. ... Another big red flag, economist Daron Acemoglu warns, lies in the shared thesis that by crunching more data and engaging more computing power, generative AI tools will become more intelligent and more accurate, fulfilling their potential as predicted. His comments were shared in a recent Goldman Sachs report titled “Gen AI: Too Much Spend, Too Little Benefit?”


How top IT leaders create first-mover advantage

“Some of the less talked about aspects of a high-performing team are the human traits: trust, respect, genuine enjoyment of each other,” Sample says. “I’m looking at experience and skills, but I’m also thinking about how the person will function collaboratively with the team. Do I believe they’ll have the best interest of the team at heart? Can the team trust their competency?” Sample also says he focuses on “will over skill.” “Qualities like curiosity and craftsmanship are sustainable, flexible skills that can evolve with whatever the new ‘toy’ in technology is,” he says. “If you’re approaching work with that bounty of curiosity and that willing mindset, the skills can adapt.” ... Steadiness and calm from the leader create the kind of culture where people are encouraged to take risks and work together to solve big problems and execute on bold agendas. That, ultimately, is what enables a technology organization to capitalize on innovative technologies. In fact, reflecting on his legacy as a CIO, Sample believes it’s not really about the technology; it’s about the people. His success, he says, has been in building the teams that operate the technology.


Can AI be Meaningfully Regulated, or is Regulation a Deceitful Fudge?

The patchwork approach is used by federal agencies in the US. Different agencies have responsibility for different verticals and can therefore introduce regulations more relevant to specific organizations. For example, the FCC regulates interstate and international communications, the SEC regulates capital markets and protects investors, and the FTC protects consumers and promotes competition. ... The danger is that the EU’s recent monolithic AI Act will go the same way as GDPR. Kolochenko prefers the US model. He believes the smaller, more agile method of targeted regulations used by US federal agencies can provide better outcomes than the unwieldy and largely static monolithic approach adopted by the EU. ... To regulate or not to regulate is a rhetorical question – of course AI must be regulated to minimize current and future harms. The real questions are whether it will be successful (no, it will not), partially successful (perhaps, but only so far as the curate’s egg is good), and will it introduce new problems for AI-using businesses (from empirical and historical evidence, yes).


The Team Sport of Cloud Security: Breaking Down the Rules of the Game

Cloud security today is too complicated to fall on the shoulders of one person or party. For this reason, most cloud services operate on a shared responsibility model that divvies security roles between the CSP and the customer. Large players in this space, such as AWS and Microsoft Azure, have even published frameworks to draw the lines of liability in the sand. While the exact delineations can change depending on the service model ... However, while the expectations laid out in shared responsibility models are designed to reduce confusion, customers often struggle to conceptualize what this framework looks like in practice. And unfortunately, when there’s a lack of clarity, there’s a window of opportunity for threat actors. ... The best-case scenario for mitigating cloud security risks is when CSPs and customers are transparent and aligned on their responsibilities right from the beginning. Even the most secure cloud services aren’t foolproof, so customers need to be aware of what security elements they’re “owning” versus what falls in the court of their CSP. 


AI's new frontier: bringing intelligence to the data source

There has been a shift with organisations exploring how to bring AI to their data rather than uploading proprietary data to AI providers. This shift reflects a growing concern for data privacy and the desire to maintain control over proprietary information. Business leaders believe they can better manage security and privacy while still benefiting from AI advancements by keeping data in-house. Bringing AI solutions directly to an organisation’s data eliminates the need to move vast amounts of data, reducing security risks and maintaining data integrity. Crucially, organisations can maintain strict control over their data by implementing AI solutions within their own infrastructure to ensure that sensitive information remains protected and complies with privacy regulations. Additionally, keeping data in-house minimises the risks associated with data breaches and unauthorised access from third parties, providing peace of mind for both the organisation and its clients. Advanced AI-driven data management tools deliver this solution to businesses, automating data cleaning, validation, and transformation processes to ensure high-quality data for AI training.


How AI helps decode cybercriminal strategies

The biggest use case for AI is its ability to process, analyze, and interpret natural language communication efficiently. AI algorithms can quickly identify patterns, correlations, and anomalies within massive datasets, providing cybersecurity professionals with actionable insights. This capability not only enhances the speed and accuracy of threat detection but also enables a more proactive and comprehensive approach to securing organizations against dark web-originated threats. This is vital in an environment where the difference between detecting a threat early in the cyber kill chain vs once the attacker has achieved their objective can be hundreds of thousands of dollars. ... Another potential use case of AI is in quickly identifying and alerting specific threats relating to an organization, helping with the prioritization of intelligence. One thing an AI could look for in data is intention – to assess whether an actor is planning an attack, is asking for advice, is looking to buy or to sell access or tooling. Each of these indicates a different level of risk for the organization, which can inform security operations.
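The intent categories listed above can be made concrete with a toy sketch. A real system would use an NLP model over large message corpora; here a keyword heuristic stands in, and the category names and keywords are illustrative only.

```python
# Toy intent tagger for threat-intelligence triage: each intent maps to a
# different risk level for the organization. Keywords are invented examples.
INTENT_KEYWORDS = {
    "selling_access": ["selling", "for sale", "access available"],
    "buying_tooling": ["looking to buy", "wtb", "need exploit"],
    "planning_attack": ["target list", "when we hit", "launch on"],
    "seeking_advice": ["how do i", "any tips", "help with"],
}

def classify_intent(message: str) -> str:
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return intent
    return "unknown"
```

The value is in the downstream mapping: a message tagged `selling_access` that names your organization warrants a different response priority than one tagged `seeking_advice`.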


Widely Used RADIUS Authentication Flaw Enables MITM Attacks

The attack scenario - researchers say a "well-resourced attacker" could make it practical - fools the Remote Authentication Dial-In User Service into granting access to a malicious user without the attacker having to know or guess a login password. Despite its 1990s heritage and reliance on the MD5 hashing algorithm, many large enterprises still use the RADIUS protocol for authentication to the VPN or Wi-Fi network. It's also "universally supported as an access control method for routers, switches and other network infrastructure," researchers said in a paper published Tuesday. The protocol is used to safeguard industrial control systems and 5G cellular networks. ... For the attack to succeed, the hacker must calculate an MD5 collision within the client session timeout, where the common defaults are either 30 seconds or 60 seconds. The 60-second default is typically for users that have enabled multifactor authentication. That's too fast for the researchers, who were able to reduce the compute time down to minutes from hours, but not down to seconds. An attacker working with better hardware or cloud computing resources might do better, they said.
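The MD5 dependency at the heart of the attack is visible in the protocol itself: RFC 2865 defines the Response Authenticator as an MD5 hash over the packet fields plus the shared secret. A minimal sketch of that computation (field values here are illustrative, not a real exchange):

```python
import hashlib

# RADIUS (RFC 2865) Response Authenticator:
#   MD5(Code + ID + Length + Request Authenticator + Attributes + Secret)
# Because MD5 is collision-prone, an attacker who can produce a collision
# within the session timeout can forge a valid-looking Access-Accept.
def response_authenticator(code: int, ident: int, length: int,
                           request_auth: bytes, attributes: bytes,
                           secret: bytes) -> bytes:
    packet = bytes([code, ident]) + length.to_bytes(2, "big") \
        + request_auth + attributes + secret
    return hashlib.md5(packet).digest()

auth = response_authenticator(
    code=2,                  # Access-Accept
    ident=1,
    length=20,               # minimal packet: header + authenticator
    request_auth=bytes(16),  # 16-byte Request Authenticator (illustrative)
    attributes=b"",
    secret=b"shared-secret",
)
```

Since the integrity of the whole exchange rests on this one MD5 digest, shrinking collision-search time is equivalent to shrinking the protocol's security margin.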


Can RAG solve generative AI’s problems?

Currently, RAG offers probably the most effective way to enrich LLMs with novel and domain-specific data. This challenge is particularly important for such systems as chatbots, since the information they generate must be up to date. However, RAG cannot reason iteratively, which means it is still dependent on the underlying dataset (knowledge base, in RAG’s case). Even though this dataset is dynamically updated, if the information there isn’t coherent or is poorly categorized and labeled, the RAG model won’t be able to understand that the retrieval data is irrelevant, incomplete, or erroneous. It would also be naive to expect RAG to solve the AI hallucination problem. Generative AI algorithms are statistical black boxes, meaning that developers do not always know why the model hallucinates and whether it is caused by insufficient or conflicting data. Moreover, dynamic data retrieval from external sources does not guarantee there are no inherent biases or disinformation in this data. ... Therefore, RAG is in no way a definitive solution. In the case of sensitive industries, such as healthcare, law enforcement, or finance, fine-tuning LLMs with thoroughly cleaned, domain-specific datasets might be a more reliable option.


Navigating the New Data Norms with Ethical Guardrails for Ethical AI

To convert ethical principles into a practical roadmap, businesses need a clear framework aligned with industry standards and company values. Also, beyond integrity and fairness, businesses must demonstrate tangible ROI by focusing on metrics like customer acquisition cost, lifetime value, and employee engagement. Operationalizing ethical guardrails involves creating a structured approach to ensure AI deployment aligns with ethical standards. Companies can start by fostering a culture of ethics through comprehensive employee education programs that emphasize the importance of fairness, transparency, and accountability. Establishing clear policies and guidelines is crucial, alongside implementing robust risk assessment frameworks to identify and mitigate potential ethical issues. Regular audits and continuous monitoring should be part of the process to ensure adherence to these standards. Additionally, maintaining transparency for end-users by openly sharing how AI systems make decisions, and providing mechanisms for feedback, further strengthens trust and accountability.
 

How CIOs Should Approach DevOps

CIOs should have a vision for scaling DevOps across the enterprise for unlocking its full range of benefits. A collaborative culture, automation, and technical skills are all necessary for achieving scale. Besides these, the CIO needs to think about the right team structure, security landscape, and technical tools that will take DevOps safely from pilot to production to enterprise scale. It is recommended to start small: dedicate a small platform team focused only on building a platform that enables automation of various development tasks. Build the platform in small steps, incrementally and iteratively. Put together another small team with all the skills required to deliver value to customers. Constantly gather customer feedback and incorporate it to improve development at every stage. Ultimately, customer satisfaction is what matters the most in any DevOps program. Security needs to be part of every DevOps process right from the start. When a process is automated, so should its security and compliance aspects. Frequent code reviews and building awareness among all the concerned teams will help to create secure, resilient applications that can be scaled with confidence.



Quote for the day:

“There is no failure except in no longer trying.” -- Chris Bradford

Daily Tech Digest - July 10, 2024

How platform teams lead to better, faster, stronger enterprises

Platform teams are uniquely equipped to optimize resource allocation because they sit in between developers and the cloud infrastructure and compute that developers need, and are able to maximize the efficiency and effectiveness of software development processes. With their unique set of skills and expertise, they effectively collaborate with other teams, including developers, data scientists, and operations teams, to accurately understand their needs and pain points. Using a product approach, platform teams remove barriers for developers and operations teams by offering shared services for developer self-service, enabling faster modernization within organizational boundaries and automation to simplify the management of applications and Kubernetes clusters in the cloud. Fostering a culture of innovation, platform teams play a crucial role in keeping the organization at the forefront of emerging trends and technologies. This enables enterprises to provide innovative solutions that set them apart in the market.


Developing an AI Use Policy

An AI Use Policy is designed to ensure that any AI technology used by your business is done so in a safe, reliable and appropriate manner that minimises risks. It should be developed to inform and guide your employees on how AI can be used within your business. ... Perhaps the most important part for the majority of your employees, set specific do’s and don’ts for inputs and outputs. This is to ensure compliance with data security, privacy and ethical standards. For example, “Don’t input any company confidential, commercially sensitive or proprietary information”, “Don’t use AI tools in a way that could inadvertently perpetuate or reinforce bias” and “Don’t input any customer or co-worker’s personal data”. For outputs, guidance can reiterate to staff the potential for misinformation or ‘hallucinations’ generated by AI. Consider rules such as “Clearly label any AI generated content”, “Don’t share any output without careful fact-checking” or “Make sure that a human has the final decision when using AI to help make a decision which could impact any living person”.


Synergy between IoT and blockchain transforming operational efficiency

The synergy between the two technologies is integral to achieving Industry 4.0 goals, including digital transformation, decentralised connectivity, and smart industry advancements. Via this integration, organisations can achieve real-time visibility into production operations, optimise supply chain processes, and enhance overall efficiency. ... In regulated industries like pharmaceutical manufacturing, where compliance is crucial, integrating IoT and Blockchain lets companies onboard suppliers to upload raw material info, batch numbers, and quality checks to a blockchain ledger. IoT devices automate data acquisition during manufacturing and storage, ensuring data integrity and transparency. In smart city ecosystems, local authorities share data with service providers for waste management, traffic updates, and more. Traffic data from sensors can be securely uploaded to a blockchain, where third-party services like food delivery and ridesharing can access it to optimise operations. Logistics companies use IoT systems to gather data on location and handling, which is uploaded to a blockchain ledger to track goods, estimate delivery time, and provide real-time updates.
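The "upload to a blockchain ledger" step in these scenarios boils down to appending hash-chained entries, where each record commits to its predecessor's hash so tampering with earlier sensor readings is detectable. A minimal sketch, with invented field names and readings:

```python
import hashlib
import json

# Append a sensor reading to a hash-chained ledger: each entry stores the
# previous entry's hash, so any later edit breaks the chain.
def add_entry(chain: list[dict], reading: dict) -> list[dict]:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"reading": reading, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

# Re-derive every hash and confirm each entry links to its predecessor.
def verify(chain: list[dict]) -> bool:
    prev = "0" * 64
    for entry in chain:
        body = {"reading": entry["reading"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

ledger: list[dict] = []
add_entry(ledger, {"sensor": "truck-42", "temp_c": 4.1})
add_entry(ledger, {"sensor": "truck-42", "temp_c": 4.3})
```

A production system would add consensus and access control on top, but the tamper-evidence property that makes the pharma and logistics use cases work comes from exactly this chaining.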


Ignore Li-ion fire risks at your peril

Li-ion batteries are prone to destructive and hard-to-control fires. There have been several reported incidents in data centers, some of which have led to serious outages, but they are not well-documented or systematically studied. ... A commonly held view is that Li-ion’s fire risk in the data center is overstated, partly as a result of marketing by vendors of alternative chemistries such as salt and nickel-zinc. If these products are promoted as a “safe” alternative, then it will (it is speculated) create a perception that Li-ion is “unsafe.” After assessing the evidence, examining the science, and hearing from data center operators at recent member meetings, Uptime Institute is taking a cautious and practicable stance at this point. While it is true that Li-ion batteries have a higher risk of fire compared with other chemistries, and these fires are particularly problematic, Uptime Institute engineers do not think Li-ion batteries should be rejected out of hand. ... Data center builders and operators should carefully consider the benefits of Li-ion batteries alongside the risks. As well as the obvious risk of serious fires, there are financial and reputational risks in preparing for, avoiding, and responding to such incidents.


More than a CISO: the rise of the dual-titled IT leader

Dual-title roles give CISOs new levers to work with and more scope to drive strategic integration and alignment of cybersecurity within the organization. ... Belknap finds having his own team of engineers puts him in a stronger position when working with partners. When looking for support or assistance with a project, his team will have already built something, reducing the amount of work needed from the partner team. “This means we can lean on them to be responsible for the things that only they can do. I don’t have to pull them into the work that only I can do or the work that’s not aligned to their expertise,” he says. These dual-title roles also recognize how CISOs are increasingly operating as technology leaders and operators of the organization, according to Adam Ely, head of digital products at Fidelity Investments who was formerly the firm’s CISO and has a long history in security. Ely says that as CISOs typically work across an organization, know how the business lines work, and are day-to-day leaders of people and technology as well as crisis managers, it stands them in good stead for dual-title or more senior positions. 


You Can’t Wish Away Technology Complexity

Every business succeeds because of technology. Every person gets paid by technology. The value of our currency itself is about technology. Of course, it is not only about technology. But tell that to the CFO or CLO. When it is about finance, there is very little pushback in saying it is all about money. When it is about legal, there is no pushback about it being about law. I’ve noticed only technologists pull back and say, “You’re right, it’s not about technology.” ... See, what people often forget is that technology complexity is cool on multiple levels. It gives us the ability to make different choices for stakeholders and customers (I mean real customers not stakeholders that think they are customers – note to business stakeholders, you and I get our paychecks from the same place, you are not my customer. Our customer is my customer). But while this complexity allows for choice, it also creates a dependency on understanding those choices. Or a dependency on a professional who does. I don’t pretend to understand medicine. That is why I ask doctors what to do.


Electronic Health Record Errors Are a Serious Problem

The exposure of healthcare records, in even minor ways, leaves patients highly vulnerable. “I never reached out to this woman [whose records were entered into my father’s], but I had all her contact information. I could have gone to her house and handed her the copy of the results I had found in my dad’s records,” Hollingsworth says. ... Data aggregators pose a further risk. These organizations may collect deidentified data to perform analyses on population-level health issues for both healthcare organizations and insurance companies. “Are they following the same security standards that we follow in the health care transaction world?” Ghanayem asks. “I don’t know.” ... Clear distinctions between important information fields must be made to cut down on adjacency errors. Concise patient summaries at the beginning of each record and usable search features may increase usability and decrease frustration that leads to the introduction of errors. And refining when alerts are issued can decrease alert fatigue, which may lead providers to simply ignore alerts even when they are valid.


Diversifying cyber teams to tackle complex threats

To make a significant change and deliver a more diverse cyber workforce, we need to focus on leadership and change our language and processes for recruitment. This takes courage and is the biggest challenge organizations face. Having a diverse team helps others see it is a place for them. It isn’t just about attracting talent; it’s also about openness and retaining talent. Organizations need to help individuals from diverse backgrounds see themselves as role models who can be out shouting about the opportunities within the sector. Diversity fosters a sense of belonging and inclusivity, making the cybersecurity field more attractive to a wider range of individuals. When potential recruits see relatable role models within a team, it breaks down the traditional and somewhat homogenous perception of cybersecurity. This inclusivity is crucial for attracting talent from underrepresented groups, particularly women and minority groups, who may not have traditionally seen themselves in cybersecurity roles. A diverse team with strong role models creates a positive feedback loop.


Nanotechnology and SRE: Pioneering Precision in Performance

Nanotechnology offers the opportunity to transform SRE at the atomic level — addressing individual tasks, subtasks, and tickets. For example, extra-sensitive nanosensors can continuously monitor system performance metrics, including temperature, voltage, and processing load. When placed in data centers, these sensors enable real-time data collection and analysis, detecting electrical and mechanical issues before they escalate and extending the lifespan of technological components. Nanobots can be programmed to address hardware issues and routine maintenance tasks. Together, these technologies can integrate into a self-healing and continuously improving system in line with SRE principles. ... Nanotechnology can potentially transform SRE, leading to enhanced system reliability and performance. Nanotechnology-enabled solutions can allow more precise monitoring, optimization, and real-time improvements, supporting the key pillars of SRE. At the same time, the foundational principles of SRE can be applied to ensure the reliability of advanced nanotechnology systems. 


Three Areas Where AI Can Make a Huge Difference Without Significant Job Risk

Doing a QC job can be annoying because even though the job is critical to the outcome, your non-QC peers and management treat you like a potentially avoidable annoyance. You stand in the way of shipping on time and at volume, potentially delaying or even eliminating performance-based bonuses. We are already discovering that to assure the quality of an AI-driven coding effort, a second AI is needed to check the result, because people just don’t like doing QC on code, particularly those who create it. ... In short, properly applied AI could highlight and help address problems that are critically reducing a company’s ability to perform to its full potential and preventing it from becoming a great place to work. ... Calculating an employee’s contribution and then using it to set compensation transparently should significantly reduce the number of employees who feel they are being treated unfairly, either by eliminating that unfairness or by showing them a path to improve their value and thus positively impact their pay.



Quote for the day:

"When you stop chasing the wrong things you give the right things a chance to catch you." -- Lolly Daskal

Daily Tech Digest - July 09, 2024

AI stack attack: Navigating the generative tech maze

Successful integration often depends on having a solid foundation of data and processing capabilities. “Do you have a real-time system? Do you have stream processing? Do you have batch processing capabilities?” asks Intuit’s Srivastava. These underlying systems form the backbone upon which advanced AI capabilities can be built. For many organizations, the challenge lies in connecting AI systems with diverse and often siloed data sources. Illumex has focused on this problem, developing solutions that can work with existing data infrastructures. “We can actually connect to the data where it is. We don’t need them to move that data,” explains Tokarev Sela. This approach allows enterprises to leverage their existing data assets without requiring extensive restructuring. Integration challenges extend beyond just data connectivity. ... Security integration is another crucial consideration. As AI systems often deal with sensitive data and make important decisions, they must be incorporated into existing security frameworks and comply with organizational policies and regulatory requirements.


How to Architect Software for a Greener Future

Firstly, it’s a time shift, moving to a greener time. You can use burstable or flexible instances to achieve this. It’s essentially a sophisticated scheduling problem, akin to looking at a forecast to determine when the grid will be greenest—or conversely, how to avoid peak dirty periods. There are various methods to facilitate this on the operational side. Naturally, this strategy should apply primarily to non-demanding workloads. ... Another carbon-aware action you can take is location shifting—moving your workload to a greener location. This approach isn’t always feasible but works well when network costs are low, and privacy considerations allow. ... Resiliency is another significant factor. Many green practices, like autoscaling, improve software resilience by adapting to demand variability. Carbon awareness actions also serve to future-proof your software for a post-energy transition world, where considerations like carbon caps and budgets may become commonplace. Establishing mechanisms now prepares your software for future regulatory and environmental challenges.
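The scheduling problem described above can be made concrete with a minimal sketch. The hourly intensity figures below are invented for illustration; a real system would pull a forecast from a grid carbon-intensity API before deciding when to launch a flexible workload:

```python
# Hypothetical hourly carbon-intensity forecast for the local grid (gCO2/kWh).
# In practice these values would come from a grid-data provider's API.
forecast = {0: 420, 3: 380, 6: 310, 9: 210, 12: 180, 15: 230, 18: 350, 21: 400}

def greenest_start_hour(forecast):
    """Return the forecast hour with the lowest carbon intensity."""
    return min(forecast, key=forecast.get)

hour = greenest_start_hour(forecast)
print(f"Run flexible batch workload at hour {hour:02d}:00 "
      f"({forecast[hour]} gCO2/kWh forecast)")
```

The same lookup generalizes to the inverse problem the excerpt mentions, avoiding peak dirty periods, by filtering out hours whose forecast intensity exceeds a threshold.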


Evaluating board maturity: essential steps for advanced governance

Most boards lack a firm grasp of fundamental governance principles. I'd go so far as to say that 8 or 9 out of 10 boards could be described this way. Your average board director is intelligent and respected within their communities. But they often don't receive meaningful governance training. Instead, they follow established board norms without questioning them, which can lead to significant governance failures. Consider Enron, Wells Fargo, Volkswagen AG, Theranos, and, recently, Boeing—all had boards filled with recognized experts. However, inadequate oversight caused or allowed them to make serious and damaging errors. This is most starkly illustrated by Barney Frank, co-author of the Dodd-Frank Act (passed following the 2008 financial crisis) and a board member of Silicon Valley Bank when it collapsed. Having brilliant board members doesn't guarantee effective governance. The point is that, for different reasons, consultants and experts can 'misread' where a board is at. Frankly, this is most often due to laziness, but sometimes it is due to not being clear about what to look for.


Mastering Serverless Debugging

Feature flags allow you to enable or disable parts of your application without deploying new code. This can be invaluable for isolating issues in a live environment. By toggling specific features on or off, you can narrow down the problematic areas and observe the application’s behavior under different configurations. Implementing feature flags involves adding conditional checks in your code that control the execution of specific features based on the flag’s status. Monitoring the application with different flag settings helps identify the source of bugs and allows you to test fixes without affecting the entire user base. ... Logging is one of the most common and essential tools for debugging serverless applications. I wrote and spoke a lot about logging in the past. By logging all relevant data points, including inputs and outputs of your functions, you can trace the flow of execution and identify where things go wrong. However, excessive logging can increase costs, as serverless billing is often based on execution time and resources used. It’s important to strike a balance between sufficient logging and cost efficiency. 
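The conditional-check pattern described above can be sketched in a few lines of Python. The flag name and payment handler are hypothetical; production systems usually read flags from a flag service or configuration store rather than an in-memory dictionary:

```python
# In-memory flag store for illustration; toggling a flag changes behavior
# at runtime without deploying new code.
FLAGS = {"new_payment_flow": False}

def is_enabled(flag):
    return FLAGS.get(flag, False)

def handle_payment(order_id):
    if is_enabled("new_payment_flow"):
        return f"{order_id}: processed via new flow"   # suspect code path
    return f"{order_id}: processed via legacy flow"    # known-good fallback

# Keep the suspect path off while investigating, then re-enable it to
# observe the application's behavior under the other configuration.
print(handle_payment("order-42"))
FLAGS["new_payment_flow"] = True
print(handle_payment("order-42"))
```

Because only the flag's status changes between the two calls, any difference in observed behavior isolates the bug to the feature behind the flag, without exposing the entire user base to it.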


Implementing Data Fabric: 7 Key Steps

As businesses generate and collect vast amounts of data from diverse sources, including cloud services, mobile applications, and IoT devices, the challenge of managing, processing, and leveraging this data efficiently becomes increasingly critical. Data fabric emerges as a holistic approach to address these challenges by providing a unified architecture that integrates different data management processes across various environments. This innovative framework enables seamless data access, sharing, and analysis across the organization irrespective of where the data resides – be it on-premises or in multi-cloud environments. The significance of data fabric lies in its ability to break down silos and foster a collaborative environment where information is easily accessible and actionable insights can be derived. By implementing a robust data fabric strategy, businesses can enhance their operational efficiency, drive innovation, and create personalized customer experiences. Implementing a data fabric strategy involves a comprehensive approach that integrates various Data Management and processing disciplines across an organization.


Empowering Self-Service Users in the Digital Age

Ultimately, portals must strike the balance between freedom and control, which can be achieved by ensuring flexibility with role-based access control. Granting end users the freedom to deploy within a secure framework of predefined permissions creates an environment ripe for innovation within a robustly protected environment. This means users can explore, experiment and innovate without concerns about security boundaries or unnecessary hurdles. But of course, as with any project, organizations can’t afford to build something and consider that job done. Measuring success is ongoing. Metrics such as how often the portal is accessed, who uses what, and which service catalogs are used should be tracked, along with other relevant data, to help point to any areas that need improvement. It is also important to remember that it is collaborative work between the platform team and end users. And in technology, there is always room for improvement. For instance, recent advances in AI/ML could soon be leveraged to analyze previously inaccessible datasets and generate smarter and faster decision-making.
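The role-based model described above reduces to mapping roles onto sets of permitted actions, sketched here in Python. The role names and permission strings are illustrative, not drawn from any particular portal product:

```python
# Each role maps to a set of permitted actions; users act freely within
# predefined boundaries without needing case-by-case approval.
ROLE_PERMISSIONS = {
    "developer": {"deploy:dev", "view:logs"},
    "platform-admin": {"deploy:dev", "deploy:prod", "view:logs", "edit:catalog"},
}

def can(role, permission):
    """Return True if the role's predefined permissions allow the action."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(can("developer", "deploy:dev"))    # free to experiment in dev
print(can("developer", "deploy:prod"))   # stopped at the security boundary
```

The check is enforced by the platform, not the user, which is what lets end users explore without worrying about where the boundaries sit.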


Desperate for power, AI hosts turn to nuclear industry

As opposed to adding new green energy to meet AI’s power demands, tech companies are seeking power from existing electricity resources. That could raise prices for other customers and hold back emission-cutting goals, according to The Wall Street Journal and other sources. According to sources cited by the WSJ, the owners of about one-third of US nuclear power plants are in talks with tech companies to provide electricity to new data centers needed to meet the demands of an artificial-intelligence boom. ... “The power companies are having a real problem meeting the demands now,” Gold said. “To build new plants, you’ve got to go through all kinds of hoops. That’s why there’s a power plant shortage now in the country. When we get a really hot day in this country, you see brownouts.” The available energy could go to the highest bidder. Ironically, though, the bill for that power will be borne by AI users, not its creators and providers. “Yeah, [AWS] is paying a billion dollars a year in electrical bills, but their customers are paying them $2 billion a year. That’s how commerce works,” Gold said.


Fake network traffic is on the rise — here’s how to counter it

“Attempting to homogenize the bot world and the potential threat it poses is a dangerous prospect. The fact is, it is not that simple, and cyber professionals must understand the issue in the context of their own goals...” ... “Cyber professionals need to understand the bot ecosystem and the resulting threats in order to protect their organizations from direct network exploitation, indirect threat to the product through algorithm manipulation, and a poor user experience, and the threat of users being targeted on their platform,” Cooke says. “As well as [understanding] direct security threats from malicious actors, cyber professionals need to understand the impact on day-to-day issues like advertising and network management from bot profiles as a whole,” she adds. “So cyber professionals must ensure that the problem is tackled holistically, protecting their networks, data and their users from this increasingly sophisticated threat. Measures to detect and prevent malicious bot activity must be built into new releases, and cyber professionals should act as educational evangelists for users to help them help themselves with a strong awareness of the trademarks of fake traffic and malicious profiles.” 


Researchers reveal flaws in AI agent benchmarking

Since calling the models underlying most AI agents repeatedly can increase accuracy, researchers can be tempted to build extremely expensive agents so they can claim top spot in accuracy. But the paper described three simple baseline agents developed by the authors that outperform many of the complex architectures at much lower cost. ... Two factors determine the total cost of running an agent: the one-time costs involved in optimizing the agent for a task, and the variable costs incurred each time it is run. ... Researchers and model developers have different benchmarking needs from downstream developers who are choosing an AI to use in their applications. Model developers and researchers don’t usually consider cost during their evaluations, while for downstream developers, cost is a key factor. “There are several hurdles to cost evaluation,” the paper noted. “Different providers can charge different amounts for the same model, the cost of an API call might change overnight, and cost might vary based on model developer decisions, such as whether bulk API calls are charged differently.”
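The two cost factors the paper identifies combine into a simple total. A sketch in Python, with all figures invented for illustration (costs are kept in whole cents to avoid float rounding):

```python
def total_agent_cost(one_time_cost, cost_per_run, runs):
    """Fixed optimization cost plus variable cost incurred on each run."""
    return one_time_cost + cost_per_run * runs

# A heavily optimized agent vs. a cheap baseline, over 1,000 evaluation runs.
optimized_cents = total_agent_cost(one_time_cost=50_000, cost_per_run=40, runs=1000)
baseline_cents = total_agent_cost(one_time_cost=1_000, cost_per_run=5, runs=1000)
print(f"optimized: ${optimized_cents / 100:,.2f}, baseline: ${baseline_cents / 100:,.2f}")
```

Because the one-time cost amortizes over run count, which agent is cheaper can flip depending on how many times it will be run, which is one reason accuracy-only leaderboards mislead downstream developers.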


10 ways to prevent shadow AI disaster

Shadow AI is practically inevitable, says Arun Chandrasekaran, a distinguished vice president analyst at research firm Gartner. Workers are curious about AI tools, seeing them as a way to offload busy work and boost productivity. Others want to master their use, seeing that as a way to prevent being displaced by the technology. Others became comfortable with AI for personal tasks and now want the technology on the job. ... shadow AI could cause disruptions among the workforce, he says, as workers who are surreptitiously using AI could have an unfair advantage over those employees who have not brought in such tools. “It is not a dominant trend yet, but it is a concern we hear in our discussions [with organizational leaders],” Chandrasekaran says. Shadow AI could introduce legal issues, too. ... “There has to be more awareness across the organization about the risks of AI, and CIOs need to be more proactive about explaining the risks and spreading awareness about them across the organization,” says Sreekanth Menon, global leader for AI/ML services at Genpact, a global professional services and solutions firm. 



Quote for the day:

“In matters of principle, stand like a rock; in matters of taste, swim with the current. ” -- Thomas Jefferson

Daily Tech Digest - July 08, 2024

How insurtech startups are addressing the challenges of slow processes in the insurance sector

Even though compliance and regulation are critical for the security of both the insurers and customers, the regulatory process can be quite long. Compliance requirements demand meticulous attention to detail and can significantly prolong the approval process for new products and services. Another factor is risk aversion, which within the industry fosters a culture of caution, where insurers are hesitant to embrace change and experiment with new approaches to product development and underwriting. ... One of the solutions for these industrial challenges lies in the collaboration of the insurance sector and the latest technologies. Insurtech solutions offer myriad innovative tools and technologies that promise to streamline product development and automate underwriting processes. One such solution gaining traction is artificial intelligence (AI) and machine learning algorithms, which can analyse vast amounts of data in real time to assess risk and expedite underwriting decisions.


Transforming Business Practices Through Augmented Intelligence

While AI raises apprehensions about potential job displacement, viewing it solely as a threat overlooks its capacity to enhance human capabilities, as evidenced by historical technological advancements. Training and education play a key role in this process, as AI has become an integral part of our reality and must be harnessed to its full potential. It is essential to align the use of artificial intelligence with the overall strategy of the organization for smooth integration of applications with data, processes, and collaboration between stakeholders. In a landscape where the internet simplifies transactions, software provides tools, and AI leverages data to make informed decisions, training and education become crucial. ... At its core, technology has always revolved around processing data. When viewed through the lens of enterprise architecture, an AI-powered machine learning tool can adeptly craft roadmaps tailored for businesses. Through advanced AI analytics, automation, and recommendation systems, enterprise architecture facilitates more informed and expedited decision-making processes.


Request for proposal vs. request for partner: what works best for you?

An RFProposal is an efficient choice when the nature of the work is standardized, while an RFPartner is the better choice when the buying organization is seeking a strategic partner for the overall best fit to meet its needs. ... When organizations shift to wanting to find a partner with the best possible solution, it’s important to understand the nature of the selection criteria change. With an RFPartner, buyers evaluate suppliers not only based on technical capabilities but also on the best value of the solution. ... “On the surface, an RFPartner sounds like a heavy lift, but we find that the overall time and effort is about the same,” he says. “In an RFProposal, the buyer is spending more time upfront defining the specs and in contentious negotiations. The RFPartner process flips this on its head and creates a more integrated bid solution that generates better solutions, spending more time together with the supplier co-creating, especially if your aim is making the shift to a highly collaborative vested business model to achieve strategic business outcomes.”


If you’re a CISO without D&O insurance, you may need to fight for it

D&O insurance covers the personal liabilities of corporate directors and officers in the event of incidents that lead to financial losses, reputational damage, or legal consequences. Without adequate D&O coverage, CISOs are left vulnerable, highlighting the need for this in an organization’s risk-management strategy. ... Lisa Hall, CISO at privately held Safebase, agrees that CISOs at all companies should be covered under their organizations’ D&O insurance policies, particularly in light of these new regulations. “I do think adding CISOs to D&O insurance will be more and more of a thing, and there is, for sure, more chatter in my CISO groups about how companies are handling this,” she says. “A lot of CISOs are also taking out errors and omissions insurance personally. I have that just for the consulting and advisory work I do.” ... “A lot of CISOs are thinking about this, especially after SolarWinds,” she says. “And if we feel that we’re not 100% protected for any decision we make, and we can be personally liable for a breach or possible incident even if we do the right thing, it’s really pushing CISOs to say, ‘Hey, company, I’ll join if you cover me or give me a different title.’ “


How DORA is fortifying Europe’s financial future with a new take on operational resilience

For DORA, digital operational resilience very simply means “the ability of a financial entity to build, assure, and review its operational integrity and reliability by ensuring, either directly or indirectly through the use of services provided by ICT third-party service providers, the full range of ICT-related capabilities needed to address the security of the network and information systems which a financial entity uses, and which support the continued provision of financial services and their quality, including throughout disruptions”. Developing on this statement in a conversation with FinTech Futures, Simon Treacy, a senior associate at global law firm Linklaters, describes DORA as “a very prescriptive framework for financial entities, primarily to build and improve the way that they manage ICT risk”. “It applies very broadly across the EU regulated financial sector,” he continues, “and really part of its aim is to harmonise standards so that the smallest payments firm is subject to the same rules for operational resilience as the biggest banks and insurers.”


Data Sprawl: Continuing Problem for the Enterprise or an Untapped Opportunity?

Data fabric technologies excel in integrating and managing data across various environments. However, they often focus on conventional data sources like databases, data lakes, or data warehouses. The result is a gap in integrating and extracting value from data residing in numerous SaaS applications, as they may not seamlessly fit into these traditional data repositories. The combined solution of data fabric and iPaaS can address complex business challenges, such as integrating data from SaaS applications with traditional data sources. This capability is particularly valuable in today’s business landscape, where data is increasingly scattered across various cloud and on-premises environments. The merging of data fabric and iPaaS technologies offers a groundbreaking solution to this challenge, opening the door to new opportunities in data management and analysis. The integration of data fabric with iPaaS addresses the complexity and expertise-dependency in iPaaS. Data fabric can enable users to discover, understand, and verify data before integration flows are built. 


AI’s moment of disillusionment

AI, whether generative AI, machine learning, deep learning, or you name it, was never going to be able to sustain the immense expectations we’ve foisted upon it. I suspect part of the reason we’ve let it run so far for so long is that it felt beyond our ability to understand. It was this magical thing, black-box algorithms that ingest prompts and create crazy-realistic images or text that sounds thoughtful and intelligent. And why not? The major large language models (LLMs) have all been trained on gazillions of examples of other people being thoughtful and intelligent, and tools like ChatGPT mimic back what they’ve “learned.” ... We go through this process of inflated expectations and disillusionment with pretty much every shiny new technology. Even something as settled as cloud keeps getting kicked around. My InfoWorld colleague, David Linthicum, recently ripped into cloud computing, arguing that “the anticipated productivity gains and cost savings have not materialized, for the most part.” I think he’s overstating his case, but it’s hard to fault him, given how much we (myself included) sold cloud as the solution for pretty much every IT problem.


How nation-state cyber attacks disrupt public services and undermine citizen trust

While nation-states do have advanced capabilities and visibility that are hard or impossible for cyber criminals to replicate, the general strategy for attackers is to target vulnerable perimeter devices such as VPNs or firewalls as an entry point to the network. Next, they focus on obtaining privileged credentials while leveraging legitimate software to masquerade as normal activity while they scout the environments for valuable data or large repositories to disrupt. It’s important to note that the commonly exploited vulnerabilities in government IT systems are not distinctly different from the vulnerabilities exploited more broadly. Government IT systems are often extremely diverse and thus subject to a variety of exploits. ... Currently, there are numerous policies and regulations, both domestically and internationally, which are inconsistent and vary in their requirements. These administrative requirements take significant resources which could otherwise be used to strengthen a company’s cybersecurity program.


How Quantum Computing Will Revolutionize Cloud Analytics

As we peer into the future of quantum computing in cloud analytics, the emphasis on collaboration and continuous innovation becomes undeniable. Integrating quantum technologies with cloud systems is not just a technological upgrade but a paradigm shift requiring robust partnerships across academia, industry, and government sectors. For instance, IBM’s quantum network includes over 140 members, including start-ups, research labs, and educational institutions, working together to advance quantum computing. This collaborative model is essential because the challenges in quantum computing are not just about hardware or software alone but about creating an ecosystem that supports an entirely new kind of computing. That ecosystem comprises components such as quantum hardware development, quantum algorithms, software tools, and educational resources. The network has also made significant achievements, such as developing quantum hardware like the IBM Quantum System One, advancing quantum algorithms for practical applications in chemistry and materials science, and creating the Qiskit software development kit to make quantum programming more accessible.


How continuous learning is reshaping the workforce

Gone are the days when lengthy training programs were sought after and people took breaks from their careers to pick up an upskilling program. Navpreet Singh highlights that upskilling will become an ongoing process integrated into the workday. “The focus will shift from acquiring specific job skills to fostering adaptability and lifelong learning. Critical thinking, problem-solving, and creativity will be paramount as automation takes over routine tasks. Traditional ways of learning may not always reflect the skills needed. Alternative credentials, like badges and micro-credentials, will showcase the specific skills employees possess, making them more competitive. By embracing this future of upskilling, we can ensure our workforce is adaptable, future-proof, and ready to drive innovation in the ever-evolving automotive industry,” explains Singh. Within the next decade or so, we will see greater demand for agile ed-tech tools that help employees learn on the go and prepare them for new roles, says Daniele Merlerati, Chief Regional Officer APAC, Baltics, Benelux at Gi Group Holding.



Quote for the day:

"Perseverance is failing nineteen times and succeeding the twentieth." -- Julie Andrews

Daily Tech Digest - July 07, 2024

How Good Is ChatGPT at Coding, Really?

A study published in the June issue of IEEE Transactions on Software Engineering evaluated the code produced by OpenAI’s ChatGPT in terms of functionality, complexity and security. The results show that ChatGPT has an extremely broad range of success when it comes to producing functional code—with a success rate ranging anywhere from as poor as 0.66 percent to as good as 89 percent—depending on the difficulty of the task, the programming language, and a number of other factors. While in some cases the AI generator could produce better code than humans, the analysis also reveals some security concerns with AI-generated code. ... Overall, ChatGPT was fairly good at solving problems in the different coding languages—but especially when attempting to solve coding problems that existed on LeetCode before 2021. For instance, it was able to produce functional code for easy, medium, and hard problems with success rates of about 89, 71, and 40 percent, respectively. “However, when it comes to the algorithm problems after 2021, ChatGPT’s ability to generate functionally correct code is affected. It sometimes fails to understand the meaning of questions, even for easy level problems,” Tang notes.


What can devs do about code review anxiety?

A lot of folks reported that either they would completely avoid picking up code reviews, for example. So maybe someone's like, “Hey, I need a review,” and folks are like, “I'm just going to pretend I didn't see that request. Maybe somebody else will pick it up.” So just kind of completely avoiding it because this anxiety refers to not just getting your work reviewed, but also reviewing other people's work. And then folks might also procrastinate, they might just kind of put things off, or someone was like, “I always wait until Friday so I don't have to deal with it all weekend and I just push all of that until the very last minute.” So definitely you see a lot of avoidance. ... there is this misconception that only junior developers or folks just starting out experience code review anxiety, with the assumption that it's only because you're experiencing the anxiety when your work is being reviewed. But if you think about it, anytime you are a reviewer, you're essentially asked to contribute your expertise and so there is an element of, “If I mess up this review, I was the gatekeeper of this code. And if I mess it up, that might be my fault.” So there's a lot of pressure there.
 

Securing the Growing IoT Threat Landscape

What’s clear is that there should be greater collective responsibility between stakeholders to improve IoT security outlooks. A multi-stakeholder response is necessary, leading to manufacturers prioritising security from the design phase, to governments implementing legislation to mandate responsibility. Currently, some of the leading IoT issues relate to deployment problems. Alex suggests that IT teams also need to ensure default device passwords are updated and complex enough to not be easily broken. Likewise, he highlights the need for monitoring to detect malicious activity. “Software and hardware hygiene is essential, especially as IoT devices are often built on open source software, without any convenient, at scale, security hardening and update mechanisms,” he highlights. “Identifying new or known vulnerabilities and having an optimised testing and deployment loop is vital to plug gaps and prevent entry from bad actors.” A secure-by-design approach should ensure more robust protections are in place, alongside patching and regular maintenance. Alongside this, security features should be integrated from the start of the development process.


Beyond GPUs: Innatera and the quiet uprising in AI hardware

“Our neuromorphic solutions can perform computations with 500 times less energy compared to conventional approaches,” Kumar stated. “And we’re seeing pattern recognition speeds about 100 times faster than competitors.” Kumar illustrated this point with a compelling real-world application. ... Kumar envisions a future where neuromorphic chips increasingly handle AI workloads at the edge, while larger foundational models remain in the cloud. “There’s a natural complementarity,” he said. “Neuromorphics excel at fast, efficient processing of real-world sensor data, while large language models are better suited for reasoning and knowledge-intensive tasks.” “It’s not just about raw computing power,” Kumar observed. “The brain achieves remarkable feats of intelligence with a fraction of the energy our current AI systems require. That’s the promise of neuromorphic computing – AI that’s not only more capable but dramatically more efficient.” ... As AI continues to diffuse into every facet of our lives, the need for more efficient hardware solutions will only grow. Neuromorphic computing represents one of the most exciting frontiers in chip design today, with the potential to enable a new generation of intelligent devices that are both more capable and more sustainable.


Artificial intelligence in cybersecurity and privacy: A blessing or a curse?

AI helps cybersecurity and privacy professionals in many ways, enhancing their ability to protect systems, data, and users from various threats. For instance, it can analyse large volumes of data, spot anomalies, and identify suspicious patterns for threat detection, which helps uncover unknown or sophisticated attacks. AI can also defend against cyber-attacks by analysing and classifying network data, detecting malware, and predicting vulnerabilities. ... The harmful effects of AI may be fewer than the positive ones, but they can have a serious impact on the organisations that suffer them. Clearly, as AI technology advances, so do the strategies for both protecting and compromising digital systems. Security professionals should not ignore the risks of AI, but rather prepare for them by using AI to enhance their capabilities and reduce their vulnerabilities. ... “As attackers are increasingly leveraging AI, integrating AI defences is crucial to stay ahead in the cybersecurity game. Without it, we risk falling behind.” Consequently, cybersecurity and privacy professionals, and their organisations, should prepare for AI-driven cyber threats by adopting a multi-faceted approach that enhances their defences while minimising risks and ensuring the ethical use of technology.
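The anomaly-spotting idea mentioned above can be illustrated with a minimal pure-Python sketch (not drawn from the article, and far simpler than production AI tooling): flag traffic counts whose z-score deviates sharply from the baseline. The traffic figures and threshold are invented for the example.

```python
from statistics import mean, stdev

def anomalous_counts(counts, threshold=2.0):
    """Return indices of values whose z-score exceeds the threshold."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Hypothetical requests per minute; the spike at index 5 is the anomaly.
traffic = [120, 118, 125, 122, 119, 900, 121, 117]
print(anomalous_counts(traffic))  # → [5]
```

Real AI-driven detection would use richer models and features (packet metadata, user behaviour, malware signatures), but the underlying principle is the same: learn a baseline and surface deviations from it.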


Intel is betting big on its upcoming Lunar Lake XPUs to change how we think of AI in our PCs

Designed with power efficiency in mind, the Lunar Lake architecture is ideal for portable devices such as laptops and notebooks. These processors balance performance and efficiency by integrating Performance Cores (P-cores) and Efficiency Cores (E-cores). This combination allows the processors to handle both demanding tasks and less intensive operations without draining the battery. The Lunar Lake processors will feature a configuration of up to eight cores, split equally between P-cores and E-cores. This design aims to improve battery life by up to 60 per cent, positioning Lunar Lake as a strong competitor to ARM-based CPUs in the laptop market. Intel anticipates that these will be the most efficient x86 processors it has ever developed. ... A major highlight of the Lunar Lake processors is the inclusion of the new Xe2 GPUs as integrated graphics. These GPUs are expected to deliver up to 80 per cent better gaming performance compared to previous generations. With up to eight second-generation Xe-cores, the Xe2 GPUs are designed to support high-resolution gaming and multimedia tasks, including handling up to three 4K displays at 60 frames per second with HDR.


Cyber Threats And The Growing Complexity Of Cybersecurity

Dr. Irvine envisions a future where the cybersecurity industry undergoes significant disruption, with a greater emphasis on data-driven risk management. “The cybersecurity industry is going to be disrupted severely. We start to think about cybersecurity more as a risk and we start to put more data and more dollars and cents around some of these analyses,” she predicted. As the industry matures, Dr. Irvine anticipates a shift towards more transparent and effective cybersecurity solutions, reducing the prevalence of smoke and mirrors in the marketplace. She also claims that “AI and LLMs will take over jobs. There will be automation, and we're going to need to upskill individuals to solve some of these hard problems. It's just a challenge for all of us to figure out how.” Kosmowski also remarked that the industry must remain on top of what will continue to be a definitive risk to organizations: “Over 86% of companies are hybrid and expect to remain hybrid for the foreseeable future, plus we know IT proliferation is continuing to happen at a pace that we have never seen before.”


The blueprint for data center success: Documentation and training

In any data center, knowledge is a priceless asset. Documenting configurations, network topologies, hardware specifications, decommissioning regulations, and other items mentioned above ensures that institutional knowledge is not lost when individuals leave the organization. So there is no need to panic once the facility veteran retires, because you will already have all the information they have. This documentation becomes crucial for staff, maintenance personnel, and external consultants to understand every facet of the systems quickly and accurately. It provides a more structured learning path, facilitates a deeper understanding of the data center's infrastructure and operations, and allows facilities to keep up with critical technological advances. By creating a well-documented environment, facilities can rest assured that authorized personnel are adequately trained and that vital knowledge is not lost in the shuffle. This contributes to overall operational efficiency and effectiveness, and further mitigates future risks and compliance violations.


Why Knowledge Is Power in the Clash of Big Tech’s AI Titans

The advanced AI models currently under development across big tech -- models designed to drive the next class of intelligent applications -- must learn from more extensive datasets than the internet can provide. In response, some AI developers have turned to experimenting with AI-generated synthetic data, a risky proposition that could compromise an entire model if even a small portion of the synthetic training data is inaccurate. Others have pivoted to content licensing deals for access to useful, albeit limited, proprietary training data. ... The real differentiating edge lies in who can develop a systemic means of achieving GenAI data validation, integrity, and reliability with a certificated or “trusted” designation, in addition to acquiring expert knowledge from trusted external data and content sources. These twin pillars of AI trust, coupled with the raw computing and computational power of new and emerging data centers, will likely be the markers of which big tech brands gain the immediate upper hand.


Should Sustainability be a Network Issue?

The beauty of replacing existing network hardware components with energy-efficient, eco-friendly, small form factor infrastructure elements wherever possible is that no adjustments have to be made to network configurations and topology. In most cases, you're simply swapping out routers, switches, etc. The need for these equipment upgrades naturally occurs with the move to Wi-Fi 6, which requires new network switches, routers, etc., in order to run at full capacity. Hardware replacements can be performed on a phased plan that commits a portion of the annual budget each year for network hardware upgrades ... There is a need in some cases to have discrete computer networks that are dedicated to specific business functions, but there are other cases where networks can be consolidated so that resources such as storage and processing can be shared. ... Network managers aren’t professional sustainability experts—but local utility companies are. In some areas of the U.S., utility companies offer free onsite energy audits that can help identify areas of potential energy and waste reduction.



Quote for the day:

"It takes courage and maturity to know the difference between a hoping and a wishing." -- Rashida Jourdain