Daily Tech Digest - October 07, 2023

No Need to Have a 'FOBO' for AI

It is a well-known fact that before AI takes your job, someone using AI will take it. To stay relevant in the job market, it is essential to adopt AI and automation tools that enhance your productivity and keep your role from becoming obsolete. Here are some strategies to help you stay ahead of the curve and compete and thrive effectively in the fast-paced, dynamic world of employment. ... Being Human: Human beings have evolved over millennia, and embracing human emotions like empathy, gratitude, compassion, and the zeal to strive for the betterment of our fellow human beings will always keep us ahead of the game. This is what distinguishes us from machines. Interdisciplinary skills: Developing skills across multiple disciplines and combining them will make you more versatile and valuable to employers. Problem Solving: It cannot be overstated: problem solving and our ability to think critically about the complex problems around us will keep us ahead of the machines.


Driving Digital Transformation Through Model-Based Systems Engineering

Digital engineering is revolutionizing important areas such as the health care industry. From sophisticated imaging devices and robotic surgical systems to telemedicine platforms that connect doctors and patients across vast distances, each of these systems depends on the integration of numerous complex components, and each must operate seamlessly to ensure optimal performance. A key approach that relates systems engineering to digital transformation and digital engineering is model-based systems engineering (MBSE). Whereas traditional systems engineering relies on document-based approaches to support systems engineering activities (e.g., text-based requirements and design documents), MBSE does so by relying on digital system models instead. In essence, MBSE supports traditional systems engineering. It doesn’t replace it; rather, it offers an approach that aims to make systems engineering more efficient. 


Optimize Your Observability Spending in 5 Steps

You can’t use an observability agent on its own to put these steps into practice. Agents are simply neutral forwarders, sending out information to be processed downstream in the observability analysis tools. You could implement some of these steps using open source tools and in-house development, but this comes with increased operational cost and complexity, requiring your team to build expertise that is not core to your business. Overall, the main challenge with putting these steps into practice is that the available tools are either like agents, which simply send information, or like observability tools, which simply receive it. You need to be able to process telemetry data in stream, to be able to transform and route it as it passes from agent to tool, to optimize and shape it for your downstream requirements. Our Mezmo Telemetry Pipelines were conceived with the goal of helping organizations get better control of their data in stream. This approach enables you to control the flow between your data sources and your observability tools, and manage in detail the optimization of your data before it arrives downstream.
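The transform-and-route idea described above can be sketched in a few lines. This is a simplified illustration of in-stream telemetry processing, not Mezmo's actual API; the event fields and destination names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str
    level: str
    message: str

def transform(event: Event) -> Event:
    # Shape the data before it reaches a downstream tool:
    # normalize the level and trim whitespace from the message.
    return Event(event.source, event.level.upper(), event.message.strip())

def route(event: Event) -> str:
    # Route by content: errors go to the (hypothetical) observability
    # tool, everything else to cheaper archive storage.
    return "observability" if event.level == "ERROR" else "archive"

def process(stream):
    routed = {"observability": [], "archive": []}
    for raw in stream:
        ev = transform(raw)
        routed[route(ev)].append(ev)
    return routed
```

The point is that transformation and routing happen between the agent and the tool, so only the data each destination actually needs arrives there.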


Why AI Regulations Are Needed to Check Risk and Misuse

Adopting a new technology poses certain risks, especially if it has not been previously deployed. That calls for risk mitigation strategies such as testing, sandboxing, proofs of concept, and smaller steps such as a minimum viable product, before complete adoption. Mahadevan believes there will always be risks and that we "amplify the risk" to a large extent today. "Companies need to follow a framework and put together a risk mitigation panel, rather than focus on the risk itself. I insist that AI and the risk mitigation should become a part of the blueprint. And this is not a job for a CIO alone, it is a job for a CHRO, the risk manager, and for operations," Mahadevan said. Deepfakes and the violation of privacy are hotly debated topics in the industry today. Thomas said deepfakes will lead to many scams, causing victims to lose a lot of money. They also violate individual privacy and pose a substantial risk at a personal level. Deepfake technology uses a form of artificial intelligence called deep learning to create convincing videos, photos, or audio clips of a subject, which are used for misinformation campaigns or to defraud and deceive relatives or friends.


New kind of quantum computer made using high-resolution microscope

It is unlikely to compete any time soon with the leading approaches to quantum computing, including those adopted by Google and IBM, as well as by many start-up companies. But the tactic could be used to study quantum properties in a variety of other chemical elements or even molecules, say the researchers who developed it. At some level, everything in nature is quantum and can, in principle, perform quantum computations. The hard part is to isolate quantum states called qubits — the quantum equivalent of the memory bits in a classical computer — from environmental disturbances, and to control them finely enough for such calculations to be achieved. Andreas Heinrich at the Institute for Basic Science in Seoul and his collaborators worked with nature’s ‘original’ qubit — the spin of the electron. Electrons act like tiny compass needles, and measuring the direction of their spin can yield only two possible values, ‘up’ or ‘down’, which correspond to the ‘0’ and ‘1’ of a classical bit. 


Net-zero carbon data centers: Expanding capacity amid evolving policy and regulation

The sting in the tail for data center developers is that emissions associated with the IT process load are now to be included in the calculation. Given that the annual energy consumption of even a modestly sized facility could run to hundreds of thousands of megawatt hours (MWh), this represents a very substantial cost for developers – unless they can drive their on-site emissions down below the 35 percent threshold. Outside of London, there is currently no policy for carbon offsetting, but it seems likely that other local authorities will follow London’s lead and introduce similar schemes in the future. In some regions, particularly the Nordics, planning policy has been introduced requiring new data centers to provide waste heat to local district heating infrastructure, or to be ‘heat network ready’ for connection to future schemes. Whilst a policy of promoting heat reuse may not lead to a direct reduction in data center emissions, it is seen as an important step towards decarbonizing the wider community, by displacing other, more carbon intensive, sources of heat.


6 Key Personality Traits for Disruptive Innovation Leaders

“Disruptive innovators require a mindset focused on leapfrogging – creating or doing something radically new or different that produces a significant leap forward,” said Hightech Partners. “Disruptive leaders ensure that everything they do adds value to the market.” ... For companies, it is important that leaders understand how to continually push the limits of their teams, organizations, and partners. Some believe that disruptive leaders should also push boundaries. “Leaders who travel a lot, surrounding themselves with diverse people and entrepreneurs, are able to continually expand their mindset and creative problem solving abilities,” said the report. ... Disruptive leaders manage incredible levels of uncertainty. “Adaptive planning is an approach where actions lead to results and leaders take the opportunity to reflect on and learn from these actions and results,” said Hightech Partners. “Then, they can modify their assumptions and approaches accordingly.” ... The word “normal” doesn’t exist in a disruptive leader’s vocabulary, says the report. “Once something has become normal, it’s probably obsolete,” said Hightech Partners. 


Enterprise architecture creating sustainable business value

“If you imagine a company with a C-suite in the penthouse and the IT department maybe in the basement, and then the business department somewhere in between, enterprise architects are able to ride the elevator and they have the capability to exit the elevator on every floor. And they are also able to move around on that floor in a very free manner. “They do have their own office somewhere. Mostly it's on the floor where the IT department is, but they're barely in their office because they're constantly sitting in other people's offices to communicate, collaborate, bring together and enable people – riding the elevator up and down. ... “Business fluency and an understanding of how a business works, as well as the ability to have a holistic perspective on a complex problem, is crucial. It is important to not only look at one aspect, but also consider how that aspect might influence another aspect. That is also something that enterprise architects are trained for like nobody else. Therefore, I believe that the success of holistic sustainability will be a discipline of enterprise architecture.”


Achieving Scalable, Agile, and Comprehensive Data Management and Governance

“Data governance in general is fairly uneven,” he explained. “In terms of protecting sensitive data, there’s been improvement, though. Organizations have been more willing to shut down risky programs that may expose sensitive data even at the expense of losing competitive advantage rather than run afoul of regulations.” As a sign of this improvement, he added, 73% of survey respondents said they were at least somewhat successful at meeting their regulatory and compliance objectives. Another key concern Stodder discussed was the highly distributed nature of today’s data environment. “Creating data silos goes hand in hand with data democratization,” he said. “Forty-one percent of our survey respondents said managing data silos was one of their top three challenges.” To address this, he said, many are turning to solutions such as data virtualization, data fabrics, or data meshes. He also added that the research showed roughly 30% are already using data virtualization, with about the same number planning to.


Global Cyberespionage Operations Surging, Microsoft Warns

Microsoft reports that when it comes to cyber operations and intelligence gathering, nominal allies target each other. Despite last month's meeting between Russian President Vladimir Putin and North Korean hereditary dictator Kim Jong Un, Pyongyang continues to run Moscow-focused espionage operations, especially focused on "nuclear energy, defense and government policy intelligence collection." Alongside the risk posed by nation-state groups, the threat posed by criminals also continues to intensify. "Ransomware-as-a-service and phishing-as-a-service are key threats to businesses, and cybercriminals have conducted business email compromise and other cybercrimes, largely undeterred by the increasing commitment of global law enforcement resources," Burt said. Microsoft said that from September 2022 through July, it saw the number of human-operated or "hands on keyboard" ransomware attacks double compared to less sophisticated, fully automated attacks. Since last November, it said, it saw the number of security incidents that appeared to lead to data exfiltration double.



Quote for the day:

"Success is a state of mind. If you want success, start thinking of yourself as a success." -- Joyce Brothers

Daily Tech Digest - October 06, 2023

Cloud infrastructure spending is growing

Although I love to be right about the strong cloud spending, that does not mean it’s suitable for all enterprises. Indeed, the trend will be to overspend, even after net-new finops deployments that closely monitor where the dollars are spent. We must focus on accountability, automation, and discipline around allocating and paying for cloud resources. I suspect many cloud deployments are hugely underoptimized and need a tune-up. Even though some of this shared infrastructure spending is unavoidable, CIOs need to review how the spending occurs and look for opportunities to save dollars without reducing the value generated by these systems. I suggest companies consider all other options, such as bringing some processing into enterprise data centers. Those prices have been falling while they have been stable or rising on the public cloud side. Also, many systems function in isolation and don’t benefit much from existing within a public cloud. Simple storage is one example, and many enterprises are putting those systems on-premises these days.


BAs are responsible for creating new models that support business decisions by working closely with finance and IT teams to establish initiatives and strategies aimed at improving revenue and/or optimizing costs. Business analysts need a “strong understanding of regulatory and reporting requirements as well as plenty of experience in forecasting, budgeting, and financial analysis combined with understanding of key performance indicators,” according to Robert Half Technology. ... Business analysts need to know how to pull, analyze and report data trends, share that information with others, and apply it to business goals and needs. Not all business analysts need a background in IT if they have a general understanding of how systems, products, and tools work. Alternatively, some business analysts have a strong IT background and less experience in business but are interested in shifting away from IT into this hybrid role. The role often acts as a communicator between the business and IT sides of the organization, so having extensive experience in either area can be beneficial for business analysts.


AI Needs Data More Than Data Needs AI

While data plays a foundational role in AI, the reverse is not true. Data doesn't inherently need AI to exist or be valuable. Data, in various forms, has been collected and analyzed for centuries without the need for sophisticated AI algorithms. Data on its own can provide valuable insights and inform decision-making processes. Therefore, organizations should not blindly chase the AI hype at the cost of ignoring the importance of data management and data quality. The role of AI is to take the computation and insights of good quality data to the next level and not necessarily attempt to fix the decades-old data management processes. ... While AI relies heavily on data for its operation and evolution, data can benefit from AI in several ways. Data Management: AI can help automate data management tasks, making it easier to process, clean and organize large datasets. Predictive Insights: AI can uncover patterns and insights in data that may not be immediately apparent to humans, enhancing the value of the data.


Enterprises see AI as a worthwhile investment

Despite prior industry research indicating that 90% of AI initiatives fail to produce substantial ROI and roughly half never leave the prototype stage, the overwhelming majority of respondents to this survey (92%) find business value from their models in production, and 66% feel their models have delivered results that are outstanding or exceed expectations. Common use cases for AI among these leading-edge organizations include personalizing the customer experience, fraud detection, optimizing sales and marketing, and improving real-time decision making. The success of this group offers a basic roadmap that other organizations should consider when developing their own best practices, including: Approach: A majority of responding organizations have a robust, defined approach and a dedicated team for monitoring ML models in production. In fact, among larger enterprises, 71% have at least 100 people working in ML, while over half have more than 250.


5 Strategies for Cloud Security in Health Care

Adopting data security in the cloud doesn’t mean merely uploading patient data to S3 and enabling encryption. There are many security controls that need to be in place before a single patient record is migrated. For instance, there is particular concern about data security on medical devices and wireless body area networks (devices that are embedded in a patient’s body). Obviously, it’s vital to secure such devices from exploits. When running services on the cloud, you should review all relevant data privacy considerations and encryption controls, including data encryption, public-key encryption, identity-based encryption, identity-based broadcast encryption and attribute-based encryption. Then adopt a framework for achieving secure and controlled identity access using federation (like OpenID Connect, which is not the same as OpenID, or SAML). Finally, you should ensure that monitoring and audit controls are in place to maintain confidentiality. You should also have an incident response plan in place to handle crisis scenarios in the event of an incident. 
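As one small illustration of the audit-control point above, every read of a patient record can be logged before any data is returned, so access is accounted for even when something goes wrong afterwards. The record schema here is hypothetical:

```python
import datetime

AUDIT_LOG = []

def read_patient_record(records, patient_id, requester):
    # Write the audit entry first, before any data leaves the store,
    # so the access trail is complete even if the read itself fails.
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": requester,
        "patient": patient_id,
        "action": "read",
    })
    return records.get(patient_id)
```

In a real deployment the log would go to an append-only, tamper-evident store rather than an in-memory list.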


Financial Institutions Turn to AI and Cloud to Solve Data Challenges

In data management, the potential uses of GenAI, powered by large language models, have been recognised by many financial institutions, including State Street. For instance, it can help in the cross-mapping of datasets, the classifying of data and more generalist applications such as summarising reports and responding to plain English inquiries. ... The Alpha platform uses GenAI with Snowflake as a strategic partner providing the data foundation of the platform. Snowflake’s cloud-native architecture streamlines data sharing and governance, enables faster time to market for data-centric applications, and offers a rich environment of AI and machine learning-based capabilities for data scientists, quants and engineers. “Every few years, the technology landscape re-sets, creating a small window of opportunity that in turn enables a giant leap in innovation; GenAI is the opportunity that will define the new set of industry leaders over the next decade,” State Street Executive Vice President and Chief Architect Aman Thind tells A-Team Group.


Building data center networks for GenAI fabric enablement

Building GenAI data centers from a network perspective differs greatly from traditional data center buildouts -- or even those that were designed to support high-performance computing (HPC). ... After all, the pace of a GenAI application is only as fast as its slowest component. If properly built, the network can be eliminated as a potential performance bottleneck. Building a highly scalable network is also key to GenAI data centers as it enables future growth capacity. Network switch fabrics must include hardware that can expand horizontally and vertically, as well as use network OSes on switching hardware that include advanced features, such as packet spraying, load awareness and intelligent traffic redirection. These features provide automated rerouting of traffic within the network and between GPU processing units that may become overloaded. ... Early GenAI adopters have concluded that the use of multisite or micro data centers is the best option to accommodate this level of density. And, yet again, this puts pressure on the network interconnecting these sites to be as high-performing and resilient as possible.
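The packet-spraying and load-aware features named above come down to two decisions, sketched here in deliberately simplified form (real switch fabrics make these choices in hardware at line rate):

```python
def spray(packets, links):
    # Packet spraying: distribute packets round-robin across all
    # equal-cost links instead of pinning a whole flow to one link.
    assignment = {link: [] for link in links}
    for i, packet in enumerate(packets):
        assignment[links[i % len(links)]].append(packet)
    return assignment

def redirect(load_by_link):
    # Load-aware redirection: steer new traffic to the link currently
    # reporting the lowest utilization.
    return min(load_by_link, key=load_by_link.get)
```

Spraying keeps any one link from saturating during the heavy all-to-all traffic of GPU training, while load-aware redirection routes around hot spots as they develop.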


Breach Roundup: Still Too Much ICS Exposed on the Internet

Apple responded to an actively exploited zero-day flaw in iOS and iPadOS on Wednesday with the release of security patches. The identified vulnerability, tracked as CVE-2023-42824, exists in the kernel and may allow an attacker to elevate privileges. "Apple is aware of a report that this issue may have been actively exploited against versions of iOS before iOS 16.6," the company said. The update also addresses CVE-2023-5217, a WebRTC component issue. WebRTC is an open-source project that supports real-time computing between browsers and mobile applications, powering uses such as video and voice calling. ... Sony Interactive Entertainment alerted around 6,800 individuals about a cybersecurity breach. The intrusion resulted from an unauthorized party exploiting a zero-day vulnerability, tracked as CVE-2023-34362, in the MOVEit file transfer platform. This critical-severity SQL injection flaw, leading to remote code execution, was used by the Clop ransomware gang in widespread attacks in late May. 


8 Ways to Combat Ageism in Your Job Search

Workplace experts say candidates can combat this by showing what efforts they've made to quickly pick up new skills and show enthusiasm for future learning. That might mean enrolling in extra training courses, getting new certifications and highlighting them in your résumé or interview, North said. Younger workers may need to show that they have taken proactive measures to learn new job skills they may lack. Older workers may want to show that they can keep up with fast-paced environments and various tech tools. ... "If you don't have to input this information, don't volunteer it," he said, adding that phrases like 40-plus years of experience also may not be best. Instead, stick to your skills and experiences. If you lack experience in one area, show how your skills are transferable for this specific job. You can also be clear about any kind of transition, like a career change, or gap in employment by placing it in an executive summary section at the top of your résumé, Freeman said. Quantify your previous work's impact with numbers or qualify it by explaining how it affected the results.


Ransomware Crisis, Recession Fears Leave CISOs in Tough Spot

With a new ransomware target being attacked every 14 seconds, organizations must prioritize ransomware prevention. As attacks grow more sophisticated, mitigating ransomware becomes increasingly challenging. There's no silver bullet to eradicate attacks, and having to operate in a tight market adds a layer of complexity. CISOs and security leaders must focus on the best return on investment while building out a multilayered approach for improving their overall IT security. One strategy to accomplish this is managing attack vectors using encrypted channels with preventive technologies that can stop adversaries before they have a chance to compromise networks or while they are executing their multistep campaigns. ... Ransomware gangs also take advantage of legitimate websites encrypted with SSL/TLS to look secure, but have been infected with drive-by downloads. And cybercriminals leech onto browser vulnerabilities that can lead to infection when the entry point is encrypted, allowing encrypted threats embedded with malicious payloads to go unnoticed.



Quote for the day:

“People are not lazy. They simply have important goals – that is, goals that do not inspire them.” -- Tony Robbins

Daily Tech Digest - October 05, 2023

AI and Overcoming User Resistance

If users are concerned, and even worried about AI, it could lead to user resistance, which is a dynamic that IT pros are familiar with from their history of implementing new systems that alter business processes, require employee retraining, and may even change employee jobs. So, are process change and user resistance any different when you introduce AI? I would argue yes. You’re not just retraining an employee on a new set of steps for processing an invoice or taking an order. You’re actually introducing an automated thinking process into what an employee has been doing. Now, technology is going to make or recommend decisions that the employee used to make. This can lead to employees experiencing a loss of empowerment and control. ... This is exactly the “sweet spot” that companies (and IT) should aim for with AI projects: an environment where everyone sees beneficial value from AI, and where no one feels disenfranchised. This is an achievable environment if users are engaged early in business process redefinition and in how AI will work. 


Eyes everywhere: How to safely navigate the IoT video revolution

Users are rightfully wary of bringing even more cameras into their homes and offices. The good news is that they, too, can protect their camera-enabled devices with some simple steps. First, customize. This includes changing default usernames and passwords, updating the device’s firmware and software, and staying informed about the latest security threats. This is a simple yet effective way to create a barrier between yourself and would-be hackers. Next, take it to the edge. Processing and storing data at the edge instead of the cloud is another surefire way to protect your endpoints. After all, by storing the information under your own lock and key, you can be sure about who can access it and how. Users also benefit from reduced latency by storing the information closer to home, which is particularly important with heavy video feeds. Finally, buy trusted brands. Attack surfaces are only as strong as their weakest link. So, choose companies that have a proven track record when it comes to privacy and security.
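The "change the defaults" step can even be enforced automatically: a provisioning script can refuse to enroll a camera that is still on factory credentials. The credential list and length threshold below are illustrative, not drawn from any vendor:

```python
# A few factory credential pairs commonly shipped on devices (illustrative list).
FACTORY_CREDENTIALS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

def safe_to_enroll(username, password, min_password_len=12):
    # Reject devices still using factory credentials or short passwords.
    if (username, password) in FACTORY_CREDENTIALS:
        return False
    return len(password) >= min_password_len
```

A fleet-management tool could run a check like this at onboarding and block the device until the credentials are rotated.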


Why HTTP Caching Matters for APIs

In some caching strategies, especially for dynamic resources, the cache can store not only the complete response but also the individual elements or changes that make up the response. This approach is known as “delta caching” or “incremental caching.” Instead of sending the complete response, delta caching sends only the changes or updates made to the cached version of the resource. ... Delta caching is particularly useful for scenarios where resources change frequently, but the changes are relatively small compared to the complete resource. For example, in a collaborative document editing application, delta caching can be employed to send only the changes made by a user to a shared document, instead of sending the entire document every time it is updated. ... Caching enhances application resilience by reducing the risk of service disruptions during periods of high demand. By serving cached responses, even if the backend servers experience temporary performance issues, the application can continue to respond to a significant portion of requests from the cache. The caching layer acts as a buffer between the backend servers and the clients.
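A toy version of delta caching, for line-oriented documents that only change or grow (a real system would use a proper diff format such as unified diff or JSON Patch):

```python
def make_delta(old_lines, new_lines):
    # Record only the lines that changed (index -> new text), plus any
    # lines appended past the end of the old version.
    delta = {i: new for i, (old, new) in enumerate(zip(old_lines, new_lines))
             if old != new}
    if len(new_lines) > len(old_lines):
        delta["tail"] = new_lines[len(old_lines):]
    return delta

def apply_delta(old_lines, delta):
    # Reconstruct the new version from the cached copy plus the delta.
    result = list(old_lines)
    for key, text in delta.items():
        if key == "tail":
            result.extend(text)
        else:
            result[key] = text
    return result
```

For the collaborative-editing example in the text, only the edited lines travel over the wire; both sides reconstruct the full document from their cached copy.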


Author Talks: How to speak confidently when you’re put on the spot

People become nervous for many reasons. More than 75 percent of people report being nervous in high-stakes communication, be it planned or spontaneous. Past experience could be a factor, as well as high stakes and the importance of the goals you’re trying to achieve. Those of us who study this at an academic level believe that the nervousness is wired into being human. We see this across all cultures. We see it develop typically in the early teen years and progress from there. There’s an evolutionary component to it. One of the most helpful tips is normalizing the anxiety that you feel. You’re not alone. ... My anxiety management plan has three steps. The first thing I do is hold something cold in the palms of my hand before I speak. That cools me down. Secondly, I say tongue twisters to warm up my voice and also to get myself in the moment. Third, I remind myself, “I am in service of my audience. I am here to help them.” That really gets me other-focused rather than self-focused. That’s my anxiety management plan. I encourage everybody to find a plan that works for them.


Dell customizes GenAI and focuses on data lakehouse

Being able to fine tune as well as train generative AI is a process that relies on data, lots and lots of data. For enterprise use cases, that data isn’t just generic data taken from a public source, but rather is data that an organization already has in its data centers or cloud deployments and is likely also spread across multiple locations. To help enable enterprises to fully benefit from data for generative AI, Dell is building out an open data lakehouse platform. The data lakehouse concept is one that was originally pioneered by Databricks, as a way of enabling organizations to more easily query data stored in cloud object storage based data lakes. The Dell approach is a bit more nuanced in that it is taking a hybrid approach to data, with a goal of being able to query data across on-premises as well as multi-cloud deployments. Greg Findlen, senior VP data management at Dell explained during the press briefing that the open data lakehouse will be able to use Dell storage and compute capabilities as well as multi-cloud storage.


Don’t try running with data before you can walk

In South Africa, data governance tends to be a grudge investment based on regulatory issues. However, organisations that don’t do the basics well, and don’t have mature data governance and established frameworks in place, may well find they are spending on analytics technologies that don’t live up to expectations. What stands in the way of getting governance right? Firstly, it’s not easy. It involves all stakeholders across all domains. It may require a mindset change, and users may need to learn to use new technology. Secondly, it can be expensive, and it may take time before the organisation sees the value of it. One of the biggest problems is that the value of data governance investments is difficult to quantify in monetary terms. ... Data products should be supported by the entire CDO capability – including the CDO, data owners and data stewards – as well as IT, to ensure the data products will add the required business value. Owners and stewards need to identify and curate the required data for the products, while also ensuring good quality data and metadata management to make it more usable for broader business.


Yes, Software Development is an Assembly Line, but not Like That

Manufacturing engineers produce assembly lines and manufacturing processes that can produce those units of value. Software engineers are largely the same, also producing systems and processes that deliver units of value. The manufactured widget of software is actually the discrete user interactions with those features and pieces of software, not the features themselves. The assembly line in software engineering isn’t, as many think, the engineers producing features. ... Systems like Total Quality Management, which are focused on driving a cultural mindset of continuous improvement and an entire company focused on providing very low defect rates, easily translate to customer satisfaction in software organizations. Just to pick on TQM a bit, if we were to adapt it to software, we would focus on the number of times users are impacted by a defect more than the number of open bugs. Instead of tracking the number of defects and searching for more, we would be tracking the number of users who either failed to receive the promised value from the product or had severely diminished value.
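Measuring "times users are impacted" rather than open bug counts is straightforward to operationalize. A minimal sketch, with a hypothetical interaction schema:

```python
def impact_metrics(interactions):
    # interactions: records with a "user" id and an "ok" outcome flag
    # (hypothetical schema for illustration).
    failed = [i for i in interactions if not i["ok"]]
    users_impacted = {i["user"] for i in failed}
    return {
        "failed_interactions": len(failed),
        "users_impacted": len(users_impacted),
        "impact_rate": len(failed) / len(interactions) if interactions else 0.0,
    }
```

Two products can have the same open bug count and wildly different impact rates; this metric surfaces the difference the TQM-style view cares about.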


Cloud Services Without Servers: What’s Behind It

“The basic idea of serverless computing has been around since the beginning of cloud computing. However, it has not become widely accepted,” explains Samuel Kounev, who heads the JMU Chair of Computer Science II (Software Engineering). But a shift can currently be observed in the industry and in science; the focus is increasingly moving towards serverless computing. A recent article in the Communications of the ACM magazine of the Association for Computing Machinery (ACM) deals with the history, status and potential of serverless computing. Among the authors are Samuel Kounev and Dr. Nikolas Herbst, who heads the JMU research group “Data Analytics Clouds”. ... “NoOps” is the first, which stands for “no operations”. This means, as described above, that the technical server management, including the hardware and software layers, is entirely the responsibility of the cloud provider. The second principle is “utilisation-based billing”, which means that only the time during which the customer actively uses the allocated computing resources is billed.
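Utilisation-based billing is easy to state precisely: charge only for milliseconds actually executed, never for idle capacity. The per-millisecond rate below is purely illustrative, not any provider's price:

```python
def serverless_bill(durations_ms, price_per_ms=0.0000002):
    # Sum only the milliseconds each invocation actually ran;
    # time between invocations costs nothing.
    return sum(durations_ms) * price_per_ms
```

Contrast this with a provisioned server, which bills for every hour it is running whether or not any request arrives.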


7 sins of software development

Some software development issues can be fixed later. Building an application that scales efficiently to handle millions or billions of events isn’t one of them. Creating effective code with no bottlenecks that surprise everyone when the app finally runs at full scale requires plenty of forethought and high-level leadership. It’s not something that can be fixed later with a bit of targeted coding and virtual duct tape. The algorithms and data structures need to be planned from the beginning. That means the architects and the management layer need to think carefully about the data that will be stored and processed for each user. When a million or a billion users show up, which layer does the flood of information overwhelm? How can we plan ahead for those moments? Sometimes this architectural forethought means killing some great ideas. Sometimes the management layer needs to weigh the benefits against the costs of delivering a feature at scale. Some data analysis just doesn’t work well at large scale. Some formulas grow exponentially with more users.
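The warning about formulas that grow with users can be made concrete. This sketch contrasts per-user work that grows linearly with a pairwise computation that grows quadratically (quadratic rather than truly exponential, but it is the most common way a feature that works at launch drowns at scale):

```python
def linear_work(n_users: int) -> int:
    # e.g., one feed refresh per user: cost tracks user count
    return n_users

def pairwise_work(n_users: int) -> int:
    # e.g., a "people you may know" feature comparing every user
    # against every other: n * (n - 1) / 2 comparisons
    return n_users * (n_users - 1) // 2

for n in (1_000, 1_000_000):
    print(n, linear_work(n), pairwise_work(n))
```

Going from a thousand users to a million multiplies the linear cost by a thousand but the pairwise cost by roughly a million; that is the kind of arithmetic the architects need to do before the feature ships.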


Organizations grapple with detection and response despite rising security budgets

To enable better understanding and evaluation, the study categorized the responding organizations into "secure creators" and "prone enterprises." The grouping was done on the basis of the number of solutions used, the adoption of emerging technologies, and the use of technologies to simplify their automation environments. The study found that secure creators are more satisfied with their approach to cybersecurity, experience fewer cybersecurity incidents, and can detect and respond to incidents more quickly. About 70% of them are early adopters of emerging technologies. The secure creators are also more focused on extracting the most value from specific advanced solutions, with 62% already using or in the late stages of implementing AI/ML solutions, as compared to only 45% of the prone enterprises. "When it comes to technology, the more clutter an organization has in its armory, the harder it is to pick up signals and get on top of issues quickly," Watson said.



Quote for the day:

"You’ll never achieve real success unless you like what you’re doing." -- Dale Carnegie

Daily Tech Digest - October 04, 2023

The Big Threat to AI: Looming Disruptions

As if semiconductor supply chain issues weren’t enough of a problem for AI production, other supply chains are piling on the challenges. "AI is software and open-source code makes up 90% of most codebases, which means the open source software supply chain has just as much, if not more, impact on AI production than regulated hardware components,” says Feross Aboukhadijeh, founder and CEO of Socket. The impact is potentially widespread given there are many open source AI models and tools on the market today and more are coming. ... There are numerous efforts afoot to relieve these concerns and secure a prime slice of the AI market pie. For what corporation does not envy Nvidia right now? “Many countries are trying to increase their piece of the global supply chain capacity and/or to onshore as much as possible through subsidies and other incentives. This has spurred significant investment and activity, but it remains to be seen whether these investments will address the supply chain problems in a timely or appropriate manner,” says Almassy.


When to Scale and When Not to Scale

Scaling is a nuanced decision in the agile journey, bridging the demands of complexity and rapid market needs. While the lure of scaling promises greater coordination, efficient handling of product intricacies, and swifter market responses, it's pivotal to approach it judiciously. It's not just about expanding teams or implementing frameworks; it's about recognizing when the product's complexity or market dynamics truly warrant a scaled approach. On the flip side, scaling without a clear strategy can introduce unforeseen challenges. From the inadvertent hiring of too many junior roles to the formation of functional silos, scaling can sometimes complicate rather than streamline. Additionally, foundational elements, such as a firm grasp of agile practices and automation, can determine the success of scaling endeavors. In essence, scaling is a tool in the agile toolkit—powerful when used correctly but potentially counterproductive if misapplied. Organizations must reflect on their unique scenarios, understanding both the promises and pitfalls of scaling, to ensure they chart a path that genuinely enhances agility, efficiency, and value delivery.


From Big Data to Better Data: Ensuring Data Quality with Verity

High-quality data is necessary for the success of every data-driven company. It enables everything from reliable business logic to insightful decision-making and robust machine learning modeling. It is now the norm for tech companies to have a well-developed data platform. This makes it easy for engineers to generate, transform, store, and analyze data at the petabyte scale. As such, we have reached a point where the quantity of data is no longer a boundary. Yet this has come at the cost of quality. ... Poor data quality in Hive caused tainted experimentation metrics, inaccurate machine learning features, and flawed executive dashboards. These incidents were hard to troubleshoot, as we had no unified approach to assessing data quality and no centralized repository for results. This delay increased the difficulty and cost of data backfills. The lack of centralization in data quality also made the data discovery process inefficient, making it hard for data scientists and data engineers to identify trustworthy data.
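As a rough illustration of the kind of automated check a data quality platform runs against a table, here is a generic null-rate check; this is a sketch of the concept only, not Verity's actual API, and all field names are invented:

```python
def null_rate(rows: list[dict], column: str) -> float:
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 1.0  # an empty table is treated as fully unhealthy
    return sum(1 for r in rows if r.get(column) is None) / len(rows)

def quality_check(rows: list[dict], column: str, max_null_rate: float) -> dict:
    """Return a centralizable result record instead of an ad hoc judgement."""
    rate = null_rate(rows, column)
    return {"column": column, "null_rate": rate, "passed": rate <= max_null_rate}

rows = [
    {"user_id": 1, "fare": 12.5},
    {"user_id": 2, "fare": None},
    {"user_id": 3},  # column absent entirely
]
result = quality_check(rows, "fare", max_null_rate=0.1)
print(result["passed"])  # False: 2 of 3 rows lack a fare
```

Recording the structured result, rather than just pass/fail, is what enables the centralized repository of outcomes the excerpt says was missing.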


AI vs software outsourcing: An opportunity or a threat?

As AI becomes more widespread, the question is whether programmers write code themselves or have chatbots write it. Customers usually expect quality. If AI can help deliver this quality faster, why not? Look, everyone knows that there is a programming language called Java. There are Apache Commons libraries. You can Google it, but can you do something with it? Can you bring value to the business? This is the point. LLMs are a tool, just like a library or a framework. However, they have other capabilities that need to be mastered and used to bring value. It will be a long time before AI can replace developers because there will always be something that needs to be fixed. Either it's an error in the code or something wrong with the configuration. For example, a bot may have written code that seems to work, but then an error appears. The developer spends little time writing the code but later spends more time looking for the error. Let's take GitHub Copilot. Programmers note that the acceptance rate of suggestions from Copilot is up to 40%. 


Why all IT talent should be irreplaceable

“Great employee” is easy to type. It’s less easy to define. Here’s a short list to get you started. Scrub it by discussing the question with your leadership team. The habit of success: Some employees seemingly don’t know how to fail. Give them an assignment and they’ll figure out a way to get it done. Competence: As a general rule, it’s better to apologize for an employee’s bad manners than for their inability to do the work. Without competence, employees with a strong success habit can do a lot of damage by, for example, creating kludges instead of sustainable solutions. Followership: Leadership is a prized attribute for employees to have. Prized, that is, if they’re leading in their leader’s direction. Otherwise, if you and they are leading in different directions, all your prized leaders will do is generate conflict and confusion. Followership is what happens when they embrace the direction you’re setting and make it their own. Intellectual honesty: Some employees can be persuaded with evidence and logic. Others trust their guts instead. That’s a physiological error. You want people who digest with their intestines but think with their brains.


Do you need both cloud architects and cloud engineers?

We need a collaborative approach with both disciplines. One cannot function properly without the other. For example, I cannot design multicloud-based systems that define different usages for different cloud services on different clouds. ... Many assume that the engineering tasks are the easiest part of the journey to the cloud. After all, if the cloud architect is good, the configuration should work, and it’s just a matter of using sound AI tools to carry out deployment. Even worse, some companies are working just with engineers and hiring for specific skills. The company may pick a cloud brand and hire security, application, data, and AI engineers for that cloud platform. They assume that this specific cloud platform is the correct and optimized platform, which will usually cause trouble. Oh, the solutions may work, but they could cost 10 times more to operate. Not surprisingly, these companies have an underoptimized architecture since they’ve given zero consideration to architecture or the use of cloud architects. AI won’t save you from needing a good architecture and a good set of engineering disciplines. 


What IT needs to know about energy-efficiency directives for data centers

New regulations springing up in various regions will be among the drivers of data center sustainability in the months ahead. There are two main groups of regulations emerging that affect data center operations, according to Jay Dietrich, research director of sustainability at Uptime Institute. One is financial reporting modeled on the Task Force for Climate-related Financial Disclosures (TCFD), which requires reporting on energy consumption and efficiency and associated greenhouse gas (GHG) emissions. The other is the European Energy Efficiency Directive (EED), which requires an energy management plan, an energy audit, and reporting of operational data. In addition, there are voluntary, country-specific standards and siting requirements for data center efficiency and operations in various countries around the world, Dietrich says. A current example of a TCFD-related regulation is the E.U. Corporate Sustainability Reporting Directive (CSRD), with reporting requirements rolling out from large to small enterprises beginning in 2025 and continuing until 2028.


What does leadership in a hybrid world look like?

Firms want their best people to stick around and give more of themselves. Studies have shown that improved employee collaboration and alignment with a common purpose is key to achieving that. But what is the best way to make that happen in the way we now wish to work and live our lives? Some suggest that the emergence of generative AI and new work tools can improve productivity regardless of the workplace setting. But perhaps a different, more human, approach is needed? The profound loosening of relationships that employees have with their firm and one another requires a similarly fundamental reimagining of the role of the leader itself. Ultimately, this will not come through new technology, systems, processes, or HR policy (however well-crafted), but through the actions and behaviours of credible and engaging people managers. Firms need to re-establish a sense of cohesion, and that needs people who are exceptionally good at doing just that. Businesses can’t just issue ultimatums or mandates; they need a leadership approach that “coheres” employees to feel less remote from one another and the firm.


Six skills you need to become an AI prompt engineer

Prompt engineering is much more of a collaborative conversation than an exercise in programming. Although LLMs are certainly not sentient, they often communicate in a way that's similar to how you'd communicate with a co-worker or subordinate. When you're defining your problem statements and queries, you will often have to think outside the box. The picture you have in your head may not translate to the internal representation of the AI. You'll need to be able to think about a variety of conversational approaches and different gambits to get the results you want. ... While you might not necessarily be expected to write the full application code, you will provide far more value if you can write some code, test your prompts in the context of the apps you're building, run debug code, and overall be part of the interactive programming process. It will be much easier for a team to move forward if the prompt engineering occurs as an integral part of the process, rather than having to add it in and test it as a completely separate operation.
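Treating prompts as testable artifacts is one way to make prompt engineering an integral part of the programming process rather than a separate operation. Below is a minimal regression-test sketch; `llm_complete` is a hypothetical stand-in for whatever model client the application actually uses:

```python
import json

def llm_complete(prompt: str) -> str:
    # Hypothetical stub: a real harness would call the model client here.
    return '{"sentiment": "positive"}'

PROMPT_TEMPLATE = (
    "Classify the sentiment of this review. Reply with JSON "
    'like {{"sentiment": "positive"}}:\n{review}'
)

def classify(review: str) -> str:
    raw = llm_complete(PROMPT_TEMPLATE.format(review=review))
    return json.loads(raw)["sentiment"]  # fails loudly if the prompt drifts

# Prompt regression cases: run these on every prompt change, like unit tests.
cases = [("Loved it, five stars!", "positive")]
for review, expected in cases:
    assert classify(review) == expected
print("all prompt cases passed")
```

Parsing the model's reply as strict JSON is the part that turns conversational trial-and-error into something a build pipeline can check.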


The Cost Dynamics of Multitenancy

Isolating tenants with infrastructure has a higher initial cost, especially as you discover the right size for tenant workloads. Once you understand the cost for a tenant, it provides a very stable cost per tenant. Any unevenness in the cost profile represents a choice of timing. For example, if you use containers per tenant, you must decide when to commission your next cluster. Software-based multitenancy has an early advantage as it keeps the initial product price low. The marginal economics of onboarding a tenant are very low — almost zero. There comes a point when the initial design can no longer manage the load. The first port of call is vertical scaling — adding more power to the infrastructure to handle the load. This increases the cost per tenant but enables further tenants to be added. Eventually, you run out of vertical scaling options and look to horizontal scaling. This requires more investment as you need to handle load balancing, re-architect stateful interactions and introduce technologies such as shared cache.
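The two cost profiles the excerpt describes can be sketched as toy functions: a step function for infrastructure-isolated tenants, where cost jumps each time a new cluster is commissioned, versus a low base price with near-zero marginal cost for software multitenancy. All numbers here are invented:

```python
def infra_isolated_cost(tenants: int, cluster_capacity: int = 10,
                        cluster_cost: float = 500.0) -> float:
    """One cluster per batch of tenants: cost steps up at each new cluster."""
    clusters = -(-tenants // cluster_capacity)  # ceiling division
    return clusters * cluster_cost

def shared_cost(tenants: int, base: float = 200.0,
                marginal: float = 2.0) -> float:
    """Software multitenancy: low base price, near-zero onboarding cost."""
    return base + tenants * marginal

for t in (1, 5, 50):
    print(t, infra_isolated_cost(t), shared_cost(t))
```

The sketch deliberately stops before the crossover the excerpt warns about: once the shared design runs out of vertical headroom, the horizontal-scaling investment (load balancing, re-architecting state, shared caches) adds its own step to the shared curve.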



Quote for the day:

"When you stop chasing the wrong things you give the right things a chance to catch you." -- Lolly Daskal

Daily Tech Digest - October 03, 2023

How AI can be a ‘multivitamin supplement’ for many industries

It won’t replace humans in the same way that supplements don’t replace a healthy diet. Still, it will strengthen companies’ existing operations and fill in the gaps that are currently making work more burdensome for human laborers. ... It’s exciting to realize that there will soon be professions that we don’t even have names for yet. As the technology ages and matures and governing bodies create the necessary laws and regulations, our current state of uncertainty will transform into an exciting, bright new future of human-tech cooperation. We are already seeing this future take shape. For instance, MarTech companies are testing AI-powered fraud detection to supplement the work that human experts do to monitor traffic quality and transparency. This not only eases the human workload but helps companies save resources while getting better results overall. Similar benefits of human-AI collaboration can be seen in healthcare, with AI that can be trained to assist patients with recovery treatments or perform routine tasks in medical offices or hospitals, freeing nurses and doctors up to focus on patient outcomes. 


Banking on Innovation: How Finance Transforms Technological Growth for Decision Makers

Regulation is a sensitive topic for the financial industry. While the need for a certain degree of oversight is universally accepted, excessive regulation can stifle the very innovation that drives economic growth. On the other hand, too little regulation can open the doors to risk accumulation and financial crises. Striking this balance is one of the most challenging tasks that government leaders face. Policies must be evidence-based, derived from transparent risk-assessment models and economic simulations. Regulatory sandboxes could offer a safe environment for financial institutions to experiment with new services and products under the watchful eye of regulators, thereby fostering innovation while ensuring compliance. ... One of the most potent ways in which PPPs can contribute to revenue management is through asset monetization. Governments often sit on a wealth of underutilized assets, ranging from real estate to utilities. A PPP can unlock the value of these assets by involving private-sector expertise and investment. 


Microsoft Releases Its Own Distro of Java 21

Microsoft’s continuing support for OpenJDK is a strong indicator of how important Java is in the enterprise software space. “And the new features of Java 21 such as lightweight threads are maintaining Java’s relevance in the cloud native age,” said Mike Milinkovich, executive director of the Eclipse Foundation. “Being one of the first vendors to ship Java SE 21 support shows how focused Microsoft is in meeting the needs of Java developers deploying workloads on Azure.” Also, Spring developers will be pleased to know that Spring Boot 3.2 now supports Java 21 features. Many other frameworks and libraries will soon release their JDK 21-supported versions. “Microsoft has some of the best developer tool makers in the world — to have them add Java to the mix makes sense,” said Richard Campbell, founder of Campbell & Associates. “Of course, that happened a couple of years ago, and JDK 21 is just the latest implementation. In the end, Microsoft wants to ensure that Azure is a great place to run Java, so having a team working on Java running in Azure helps to make that true. What does it mean for the ecosystem? More choices for implementations of Java, better Java tooling, and more places to run Java fast and securely.”


Why embracing complexity is the real challenge in software today

The reason we can’t just wish away or “fix” complexity is that every solution — whether it’s a technology or methodology — redistributes complexity in some way. Solutions reorganize problems. When microservices emerged (a software architecture approach where an application or system is composed of many smaller parts), they seemingly solved many of the maintenance and development challenges posed by monolithic architectures (where the application is one single interlocking system). However, in doing so microservices placed new demands on engineering teams; they require greater maturity in terms of practices and processes. This is one of the reasons why we cautioned people against what we call “microservice envy” in a 2018 edition of the Technology Radar, with CTO Rebecca Parsons writing that microservices would never be recommended for adoption on Technology Radar because “not all organizations are microservices-ready.” We noticed there was a tendency to look to adopt microservices simply because it was fashionable. This doesn’t mean the solution is poor or defective. 


Balancing Cost and Resilience: Crafting a Lean IT Business Continuity Strategy

Effective monitoring is the backbone of a resilient infrastructure. The approach should focus on: Filtering out the noise - Monitoring solutions need to ensure that only critical notifications are sent out, preventing information overload and ensuring that the right people are alerted promptly when critical events inevitably happen. Acting quickly and decisively - Time is of the essence during disruptions. IT, DevOps, SIRT, and even PR teams need to be well coordinated for various types of events. From security breaches to data center fires or even just mundane equipment failures, anything that might result in customer or operation disruptions will involve cross-team communications and collaboration. The only way to get better at handling these is to have documentation on what should be done, a clear chain of command, and practice drills. In conclusion, a comprehensive backup and recovery strategy is essential for businesses aiming for uninterrupted operations. While there are many solutions available in the market, it’s crucial to find one that aligns with your business needs. 
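The "filtering out the noise" point can be sketched as a simple severity threshold plus deduplication, so that only critical, previously unseen events page anyone. Field names and severity levels here are illustrative:

```python
SEVERITY = {"info": 0, "warning": 1, "critical": 2}

def filter_alerts(alerts: list[dict], min_severity: str = "critical") -> list[dict]:
    """Drop sub-threshold noise and duplicates; page only on what clears the bar."""
    seen, out = set(), []
    for a in alerts:
        key = (a["service"], a["message"])
        if SEVERITY[a["severity"]] >= SEVERITY[min_severity] and key not in seen:
            seen.add(key)
            out.append(a)
    return out

alerts = [
    {"service": "db", "severity": "info", "message": "slow query"},
    {"service": "db", "severity": "critical", "message": "primary down"},
    {"service": "db", "severity": "critical", "message": "primary down"},  # dup
    {"service": "web", "severity": "warning", "message": "5xx spike"},
]
print(len(filter_alerts(alerts)))  # 1: a single deduplicated critical page
```

Real monitoring stacks do this with routing rules and grouping windows, but the principle is the same: the on-call person should see one actionable signal, not four notifications.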


How do you solve a problem like payments infrastructure?

Today, banks need to be willing to adopt new technology to change, and this will involve working with a third-party service provider. Another roundtable participant added that as part of this process, it is imperative to utilise validation evaluation to recycle new enhancements. Otherwise, banks will end up with the belief that the improvements that were made are unique, but in fact, competitors will keep pace or even get ahead when it comes to the innovation game or enticing new customers. This banker revealed that they opted to not disconnect from their existing infrastructure, but instead chose a top layer architecture to process payments in a more efficient way. In line with this, the participant added that culture must be considered, because this is what brings together the different components that are needed and ultimately reveals when the time is right to change the systems. Providing background information, this Sibos attendee mentioned that 15 years ago, the bank considered whether it would be more cost effective to map local, regional, or global ISO 20022 messaging into existing architecture or to create a new platform that could work for the next 20 years. 


GenAI: friend or foe in fraud risk management?

Building high-performance fraud detection algorithms today is dependent on real-life customer and transaction data to train and validate the models, which has remained a constraint. GenAI can help with realistic synthetic data creation for model training and validation, scenario and fraud attack simulation to identify vulnerabilities and design controls to mitigate these risks. Customer due diligence (CDD) is a critical function in fraud prevention – be it new client onboarding or new credit approvals (loans, credit cards, increasing credit limits) for existing clients. GenAI can be a great tool to go through piles of KYC documentation and reference them with customer-filled forms and other subscribed data sources of the FI to come up with a CDD summary report. GenAI can also be used to analyse user communications with FIs – such as emails, chats, documents and product and service requests – to extract insights on financial behaviour, sentiment analysis for intentions and potential risks of fraud. Fraud investigations can also leverage GenAI for alert and dispute resolution by accessing different sources of information on the context and providing a summary of the case that will aid in its decisioning.
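A toy stand-in for the synthetic-data idea: generating labeled transactions for model training and validation without touching real customer data. The distributions and fraud signal here are hard-coded and invented; a GenAI-based generator would instead learn realistic patterns from data:

```python
import random

def synthetic_transactions(n: int, fraud_rate: float = 0.02, seed: int = 7) -> list[dict]:
    """Generate labeled synthetic transactions for training/validation."""
    rng = random.Random(seed)  # seeded so the dataset is reproducible
    rows = []
    for i in range(n):
        is_fraud = rng.random() < fraud_rate
        # Invented signal: fraudulent amounts skew larger, hours skew nocturnal.
        amount = rng.lognormvariate(6, 1) * (3 if is_fraud else 1)
        hour = rng.choice([1, 2, 3]) if is_fraud else rng.randrange(24)
        rows.append({
            "txn_id": i,
            "amount": round(amount, 2),
            "hour": hour,
            "label": int(is_fraud),
        })
    return rows

data = synthetic_transactions(10_000)
print(sum(r["label"] for r in data))  # roughly 2% of 10,000
```

The same generator can also be skewed deliberately, which is how the scenario and attack simulation mentioned above would probe a model's blind spots.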


Weaving Cyber Resilience into the Strategic Fabric of Higher Education Institutions

There is no shortage of steps that institutions can take to bolster their cyber resilience and ensure that, should the worst happen, they’re prepared. A good place to start is by assessing the institution’s current level of resilience and looking for any gaps or obstacles. In many cases, Goerlich says, the key is simplification. For example, adopting a zero-trust security strategy can also improve a college or university’s ability to respond, maintain continuity and bounce back following an adverse event, he says. Another factor complicating resiliency for many institutions is overly complex network environments, particularly in the cloud. As colleges and universities clamor to embrace digital transformation and cloud networking, it’s not uncommon for their environments to grow to a degree that becomes unmanageable. But uncontrolled and unregulated cloud sprawl can have a serious impact on an institution’s resilience. Developing easy-to-follow approaches and processes — along with adopting simplified, automated and easy-to-use technology solutions — can make a significant difference, Goerlich says. 


How to make asynchronous collaboration work for your business

Asynchronous working can bring some benefits that synchronous work can't – most notably speed. “Real-time communication means everyone must be in the same place, or at least the same time zone, in order for work to happen. If workers need to wait for syncs to decide or act on something, it slows down the company as a whole and reduces its ability to compete,” says van der Voort. Asynchronous collaboration allows people to work at their own pace, and does not force them to wait for input from others. Morning people, evening people, midnight oil people, collaborating across geographies, can in some cases deliver higher quality results than forcing everyone to come together for a 10am video call. To get this working well, policies such as having core working hours for each staff member, and having very clear goals and anticipated outcomes for all meetings, can be incredibly useful. “One of the most significant and highly sought-after benefits asynchronous collaboration offers is a dramatic reduction in meetings,” argues Lawyer. “It allows team members to contribute in the least amount of minutes, freeing up time for other work.”


Securing the Evolution of Smart Home Applications

Very few in the cybersecurity community have forgotten one of the most noteworthy incidents, the Mirai Botnet, which struck back in 2016. Attackers behind the botnet infiltrated the site of well-known cybersecurity journalist Brian Krebs. The Distributed Denial of Service (DDoS) attack lasted for days, 77 hours to be exact. It involved 24,000 Mirai-infected Internet-of-Things devices, including personal surveillance cameras. Jumping ahead to June 2023, the Federal Trade Commission (FTC) settled a case with Ring’s owner, Amazon. The online retailing giant agreed to pay the FTC nearly $31 million in penalties to settle recently filed federal lawsuits over privacy violations. The FTC alleged that Ring compromised customer privacy by allowing any employee or contractor to access consumers’ private videos. The FTC also claimed hackers used Ring cameras’ two-way functionality to harass and even physically threaten consumers – including children – if they did not pay a ransom. These types of incidents clearly illustrate how critical it is to secure devices like cameras in a smart home.



Quote for the day:

"Before you are a leader, success is all about growing yourself. When you become a leader, success is all about growing others." -- Jack Welch