Daily Tech Digest - June 13, 2024

Backup lessons learned from 10 major cloud outages

So, what’s the most critical lesson here? Back up your cloud data! And I don’t just mean relying on your provider’s built-in backup services. As we saw with Carbonite, StorageCraft and OVH, those backups can evaporate along with your primary data if disaster strikes. You need to follow the 3-2-1 rule religiously: keep at least three copies of your data, on two different media, with one copy off-site. And in the context of the cloud, “different media” means not storing everything in the same type of system; use different failure domains. Also, “off-site” means in a completely separate cloud account or, even better, with a third-party backup provider. But it’s not just about having backups; it’s about having the right kind of backups. Take the StorageCraft incident, for example. They lost customer backup metadata during a botched cloud migration, rendering those backups useless. This hammers home the importance of not only backing up your primary data but also maintaining the integrity and recoverability of your backup data itself.
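
A minimal sketch of the 3-2-1 rule in cloud terms might look like the following, assuming boto3 and two independent targets: a bucket in a separate cloud account and a third-party, S3-compatible provider. Every profile name, bucket, and endpoint here is a placeholder, not a recommendation.

```python
# Hypothetical 3-2-1 sketch: the primary data is copy one; these are copies
# two and three, each in a different failure domain.
import boto3

def backup_to_independent_targets(local_path: str, key: str) -> None:
    # Copy two: a bucket in a *separate* cloud account (distinct credentials),
    # so a compromise of the primary account cannot reach it.
    secondary = boto3.session.Session(profile_name="backup-account").client("s3")
    secondary.upload_file(local_path, "example-offsite-backups", key)

    # Copy three: a third-party, S3-compatible provider outside the primary
    # cloud entirely (the cloud equivalent of "different media").
    third_party = boto3.client(
        "s3", endpoint_url="https://backup.example-provider.com"
    )
    third_party.upload_file(local_path, "example-third-party-backups", key)

backup_to_independent_targets("/var/backups/db-2024-06-13.dump",
                              "db/2024-06-13.dump")
```

The upload alone is not the backup, as the StorageCraft incident shows; regularly verifying that these copies actually restore is part of the rule.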


4 Ways to Control Cloud Costs in the Age of Generative AI

First and foremost, prioritize building a cost-conscious culture within your organization. IT professionals face serious challenges in getting spending under control and identifying value where they can. Educating teams on cloud cost management strategies and fostering accountability can empower them to make informed decisions that align with business objectives. Organizations are increasingly implementing FinOps frameworks and strategies in their cloud cost optimization efforts as well. This promotes a shared responsibility for cloud costs across IT teams, DevOps, and other cross-functional teams. ... Implementing robust monitoring and optimization tools is essential. By leveraging analytics and automation, your organization can gain real-time insights into cloud usage patterns and identify opportunities for optimization. Whether it's rightsizing resources, implementing cost allocation tags, or leveraging spot instances, proactive optimization measures can yield substantial cost savings without sacrificing performance.
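
Cost allocation tags are one of the more concrete measures named above. As a hedged illustration, this boto3 sketch tags EC2 instances so spend can be attributed per team in billing reports; the tag keys and values are invented for the example.

```python
# Illustrative sketch: apply cost-allocation tags to EC2 instances so that
# billing reports can break spend down by team and project.
import boto3

ec2 = boto3.client("ec2")

def tag_for_cost_allocation(instance_ids, team, project):
    ec2.create_tags(
        Resources=instance_ids,
        Tags=[
            {"Key": "CostCenter", "Value": team},   # invented tag scheme
            {"Key": "Project", "Value": project},
        ],
    )

tag_for_cost_allocation(["i-0123456789abcdef0"], "data-eng", "etl-v2")
```

The tags only pay off once they are also activated as cost allocation tags in the billing console, a one-time administrative step.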


Gen AI can be the answer to your data problems — but not all of them

One use case is particularly well suited for gen AI because the technology was specifically designed to generate new text. “They’re very powerful for generating synthetic data and test data,” says Noah Johnson, co-founder and CTO at Dasera, a data security firm. “They’re very effective on that. You give them the structure and the general context, and they can generate very realistic-looking synthetic data.” The synthetic data is then used to test the company’s software, he says. ... The most important thing to know is that gen AI won’t solve all of a company’s data problems. “It’s not a silver bullet,” says Daniel Avancini, chief data officer at Indicium, an AI and data consultancy. If a company is just starting on its data journey, getting the basics right is key, including building good data platforms, setting up data governance processes, and using efficient and robust traditional approaches to identifying, classifying, and cleaning data. “Gen AI is definitely something that’s going to help, but there are a lot of traditional best practices that need to be implemented first,” he says.
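
To make Johnson's point concrete, a sketch along these lines hands a model the schema and context and gets fictional test rows back. The model name, prompt, and schema are assumptions for illustration, not anything prescribed by Dasera.

```python
# Hedged sketch: generating synthetic test data with an LLM via the OpenAI
# Python client. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

schema = "id (int), full_name (str), email (str), signup_date (YYYY-MM-DD)"
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model choice
    messages=[{
        "role": "user",
        "content": (
            "Generate 5 rows of realistic but entirely fictional test data "
            f"as CSV with this schema: {schema}. Do not use real people."
        ),
    }],
)
print(response.choices[0].message.content)  # feed into the test suite
```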


Scores of Biometrics Bugs Emerge, Highlighting Authentication Risks

Biometrics generally are regarded as a step above typical authentication mechanisms — that extra James Bond-level of security necessary for the most sensitive devices and the most serious environments. ... The critical nature of the environments in which these systems are so often deployed necessitates that organizations go above and beyond to ensure their integrity. And that job takes much more than just patching newly discovered vulnerabilities. "First, isolate a biometric reader on a separate network segment to limit potential attack vectors," Kiguradze recommends. Then, "implement robust administrator passwords and replace any default credentials. In general, it is advisable to conduct thorough audits of the device’s security settings and change any default configurations, as they are usually easier to exploit in a cyberattack." "There have been recent security breaches — you've probably read about them," acknowledges Rohan Ramesh, director of product marketing at Entrust. But in general, he says, there are ways to protect databases with hardware security modules and other advanced encryption technologies.


Mastering the tabletop: 3 cyberattack scenarios to prime your response

The ransomware CTEP explores aspects of an organization’s operational resiliency and poses key questions aimed at understanding threats to an organization, what information the attacker leverages, and how to conduct risk assessments to identify specific threats and vulnerabilities to critical assets. Given that ransomware attacks focus on data and systems, the scenario asks key questions about the accuracy of inventories and whether there are resources in place dedicated to mitigating known exploited vulnerabilities on internet-facing systems. This includes not just having backups, but also knowing their retention period and understanding how long it would take to restore from them in an event such as a ransomware attack. Questions asked during the tabletop also include a focus on assessing zero-trust architecture implementation or lack thereof. This is critical, given that zero trust emphasizes least-permissive access control and network segmentation, practices that can limit the lateral movement of an attack and potentially keep it from accessing sensitive data, files, and systems.


10 Years of Kubernetes: Past, Present, and Future

There is little risk (or reason) that Wasm will in some way displace containers. WebAssembly’s virtues — fast startup time, small binary sizes, and fast execution — lend themselves strongly to serverless workloads where there is no long-running server process. But none of these things makes WebAssembly an obviously better technology for the long-running server processes that are typically encapsulated in containers. In fact, the opposite is true: Right now, few servers can be compiled to WebAssembly without substantial changes to the code. When it comes to serverless functions, though, WebAssembly’s sub-millisecond cold start, near-native execution speed, and beefy security sandbox make it an ideal compute layer. If WebAssembly will not displace containers, then our design goal should be to complement containers. And running WebAssembly inside of Kubernetes should involve the deepest possible integration with existing Kubernetes features. That’s where SpinKube comes in. Packaging a group of open source tools created by Microsoft, Fermyon, Liquid Reply, SUSE, and others, SpinKube plumbs WebAssembly support directly into Kubernetes. A WebAssembly application can use secrets, config maps, volume mounts, services, sidecars, meshes, and so on.


Cultivating a High Performance Environment

At the organizational level, how is a culture that supports high performers put in place and how does it remain in place? The simple answer is that cultural leaders must set the foundation. A great example is Gary Vaynerchuk. As CEO of his organization, he embodies many high-performing qualities we’ve identified as power skills. He is the primary champion (Sponsor) for this culture, hires leaders (resources) who make up a group of champions, and these leaders hire others (teams) who expand the group of champions. Tools, tactics, and processes are put in place by all champions at all levels to support, build, and maintain the culture. Those who don’t resonate with high performance are supported as well and for as long as possible. If they decide not to support the culture, they are helped to leave in a supportive manner. As organizations change and embrace true high performance (power skills), authentic high performers will proliferate. Organizations don’t really have a choice about whether to move to the new paradigm. This is the way now and of the future. Steve Jobs said it well: “We don’t hire experts to tell them what to do. We hire experts to tell us what to do.”


Top 10 Use Cases for Blockchain

Smart contracts on the blockchain can also automate derivative contract execution based on pre-defined rules while automating dividend payments. Perhaps most notable is its ability to tokenise traditional assets such as stocks and bonds into digital securities – paving the way for fractional ownership. ... Blockchain can also power CBDCs – a digital form of central bank money that offers unique advantages for central banks at retail and wholesale levels, from enhanced financial access for individuals to greater infrastructural efficiency for intermediate settlements. With distributed ledger technology (DLT), CBDCs can be issued, recorded and validated in a decentralised way. ... Blockchain technology is becoming vital in the cybersecurity space too. When it comes to digital identities, blockchain enables the concept of self-sovereign identity (SSI), where individuals have complete control and ownership over their digital identities and personal data. Rather than relying on centralised authorities like companies or governments to issue and manage identities, blockchain enables users to create and manage their own.


Encryption as a Cloud-to-Cloud Network Security Strategy

Like upper management, there are network analysts and IT leaders who resist using data encryption. First, they view encryption as overkill—in technology and in the budget. Second, they may not have much first-hand experience with data encryption. Encryption uses black-box mathematical algorithms that few IT professionals understand or care about. Next, if you opt to use encryption, you have to make the right choice out of many different types of encryption options. In some cases, an industry regulation may dictate the choice of encryption, which simplifies the decision. This can actually be a benefit on the budget side because you don't have to fight for new budget dollars when the driver is regulatory compliance. However, even if you don't have a regulatory requirement for the encryption of data in transit, security risks are growing if you run without it. Unencrypted data in transit can be intercepted by malicious actors for purposes of identity theft, intellectual property theft, data tampering, and ransomware. The more companies move into a hybrid computing environment that operates on-premises and in multiple clouds, the greater their risk, since more potentially unprotected data is moving from point to point over this extended outside network.
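
For the data-in-transit case, the point is that you do not need to understand the black-box math to use encryption well; standard libraries wrap it. A minimal sketch with Python's built-in ssl module (the host is a placeholder):

```python
# Minimal sketch: TLS-encrypted data in transit using only the standard
# library. Certificate verification is on by default with
# create_default_context().
import socket
import ssl

context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
        # Anything sent here is ciphertext on the wire; an interceptor sees
        # no usable payload.
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n"
                    b"Connection: close\r\n\r\n")
        print("negotiated:", tls.version())  # e.g. TLSv1.3
        print(tls.recv(128))
```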


Automated Testing in DevOps: Integrating Testing into Continuous Delivery

Automated testing shifts ownership responsibilities to the engineering team. They can prepare test plans or assist with the procedure alongside regular roadmap feature development and then complete the execution using continuous integration tools. With the help of an efficient automation testing company, you can reduce the QA team size and let quality analysts focus more on vital and sensitive features. ... The major goal of continuous delivery is to deliver new code releases to customers as fast as possible. If there is any manual or time-consuming step within the delivery process, automating delivery to users becomes challenging, if not impossible. Continuous delivery can be an effective part of a larger deployment pipeline. It is a successor to, and relies on, continuous integration. Continuous integration is entirely responsible for running automated tests against new code changes and verifying whether those changes break existing features or introduce new bugs. Continuous delivery takes place once the CI step passes the automated test plan.
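
As a small illustration of the CI gate described here, the pytest sketch below is the kind of check a pipeline would run on every change; `apply_discount` is a hypothetical function under test, not taken from the article.

```python
# Sketch of an automated test a CI tool runs against every new code change.
# A failure blocks the delivery step.
import pytest

def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_existing_behavior_still_holds():
    # Guards a shipped feature; a regression here stops the release.
    assert apply_discount(100.0, 25) == 75.0

def test_invalid_input_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

In a pipeline, the CI job simply runs `pytest`; a non-zero exit code fails the build and keeps the change out of the delivery stage.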



Quote for the day:

"If you really want the key to success, start by doing the opposite of what everyone else is doing." -- Brad Szollose

Daily Tech Digest - June 11, 2024

4 reasons existing anti-bot solutions fail to protect mobile APIs

Existing anti-bot vendors attempt to bend their products to address mobile-based threats. For example, some require the implementation of an SDK into the mobile app, because that’s the only way the mobile app can respond to the main methods used by WAFs to distinguish bots from humans. Such solutions also typically require separate servers to be deployed behind the WAF, which are used to evaluate connection requests to discern legitimate connections from malicious ones. These “workarounds” impose single points of failure, performance bottlenecks, and latency, and often come with unacceptable capacity limitations. On top of that, WAF mobile SDKs also have limitations in terms of dev framework support and can require developers to rewrite the network stack to achieve compatibility with the WAF. Such workarounds create more work and more costs. To make matters worse, because most anti-bot solutions on the market are not sufficiently hardened to protect against clones, spoofing, malware, or tampering, hackers can easily compromise, bypass, or disable the anti-bot solution if it’s implemented inside a mobile app that is not sufficiently protected against reverse engineering and other attacks.


Advancing interoperability in Africa: Overcoming challenges for digital integration

From a legal perspective, Mihret Woodmatas, senior ICT expert, department of infrastructure and energy, African Union Commission (AUC), points out that differing levels of development across countries pose a challenge. A significant issue is the lack of robust legal frameworks for data protection and privacy. ... Hopkins underscores the importance of sharing data to benefit those it is collected for, particularly refugees. While sharing data comes with risks, particularly concerning security and privacy, these can be managed with proper risk treatments. The goal is to avoid siloed data systems and instead foster coordination and cooperation among different entities. Hopkins discusses the digital transformation across states and international agencies, emphasizing the need for effective data sharing. Good data sharing practices enable various entities to provide coordinated services, significantly benefiting refugees by facilitating their access to education, healthcare, and employment. Interoperability also supports local communities economically and ensures a unique and continuous identity for refugees, even if they remain displaced for years or decades.


Cloud migration expands the CISO role yet again

CISOs must now ensure they can report to the SEC within four business days of determining an incident’s materiality, describing its nature, scope, and potential impact. They must also communicate risk management strategies and incident response plans to ensure the board is well-informed about the organization’s cybersecurity posture. These changes require a more structured and proactive approach because CISOs must now be aware of compliance status in near real-time, not only to provide all cybersecurity incident data and context to the board, compliance teams, and finance teams, but to ensure they can determine quickly whether an incident has a material impact and therefore must be reported to the SEC. CISOs who miss making a timely disclosure or have the wrong security and compliance strategy in place can expect to be fined, even if the incident doesn’t turn into a catastrophic cybersecurity event. Boards must be able to trust that CISOs can answer any question related to compliance and security quickly and accurately, and the board itself must be familiar with cybersecurity concepts, able to understand the risks and ask the right questions.


Generative AI Is Not Going To Build Your Engineering Team For You

People act like writing code is the hard part of software. It is not. It never has been, it never will be. Writing code is the easiest part of software engineering, and it’s getting easier by the day. The hard parts are what you do with that code—operating it, understanding it, extending it, and governing it over its entire lifecycle. A junior engineer begins by learning how to write and debug lines, functions, and snippets of code. As you practice and progress towards being a senior engineer, you learn to compose systems out of software, and guide systems through waves of change and transformation. Sociotechnical systems consist of software, tools, and people; understanding them requires familiarity with the interplay between software, users, production, infrastructure, and continuous changes over time. These systems are fantastically complex and subject to chaos, nondeterminism and emergent behaviors. If anyone claims to understand the system they are developing and operating, the system is either exceptionally small or (more likely) they don’t know enough to know what they don’t know. Code is easy, in other words, but systems are hard.


Is Oracle Finally Killing MySQL?

Things have changed, though, in recent years with the introduction of “MySQL Heatwave”—Oracle’s MySQL Cloud Database. Heatwave includes a number of features that are not available in MySQL Community or MySQL Enterprise, such as acceleration of analytical queries or ML functionality. When it comes to “analytical queries,” it is particularly problematic as MySQL does not even have parallel query execution. At a time when CPUs with hundreds of cores are coming to market but individual cores are not getting significantly faster, this lack of parallelism increasingly limits performance. This does not just apply to queries coming from analytical applications but also to simple “group by” queries common in operational applications. Note: MySQL 8 does have some parallelization support for DDLs but not for queries. Could this have something to do with giving people more reason to embrace MySQL Heatwave? Or, rather, to move to PostgreSQL or adopt ClickHouse? Vector Search is another area where open source MySQL falls short. While every other major open source database has added support for Vector Search functionality, and MariaDB is working on it, having it as a cloud-only MySQL Heatwave feature in the MySQL ecosystem is unfortunate, to say the least.


Giant legacies

Thought leadership in general demands we stand on the shoulders of innovators who have gone before. Thinking in HR is no exception. The essence of this debt was captured in the Hippocratic Oath this column had proposed for HR professionals: "I shall not forget the debt and respect I owe to those who have taught me and freely pass on the best of my learnings to those who work with me as well as through professional bodies, educational institutes or other means of dissemination. ... Thinking up brilliant new concepts or applying those that have taken root in one field to another is necessary but not sufficient for creating a LOG. There are two other tests. If the concept, strategy or process proves its worth, it should be lasting. It need not become an unchangeable sacrament, but further developments should emanate from it rather than demand a reversal of the flow. While we can sympathize with radical ideas (or greedy cats) that are brought to a dead end by 'malignant fate', we cannot honour them as LOGs. Apart from durability over time, we have transmission across organisational boundaries, which establishes the generalizability of the innovation.


Solving the data quality problem in generative AI

One of the biggest misconceptions surrounding synthetic data is model collapse. However, model collapse stems from research that isn’t really about synthetic data at all. It is about feedback loops in AI and machine learning systems, and the need for better data governance. For instance, the main issue raised in the paper The Curse of Recursion: Training on Generated Data Makes Models Forget is that future generations of large language models may be defective due to training data that contains data created by older generations of LLMs. The most important takeaway from this research is that to remain performant and sustainable, models need a steady flow of high-quality, task-specific training data. For most high-value AI applications, this means fresh, real-time data that is grounded in the reality these models must operate in. Because this often includes sensitive data, it also requires infrastructure to anonymize, generate, and evaluate vast amounts of data—with humans involved in the feedback loop. Without the ability to leverage sensitive data in a secure, timely, and ongoing manner, AI developers will continue to struggle with model hallucinations and model collapse.


DevSecOps Made Simple: 6 Strategies

Collective Responsibility describes the common practices shared by organizations that have taken a program-level approach to security culture development. Broken into three key areas: 1) executive support and engagement, 2) program design and implementation, 3) program sustainment and measurement, the paper suggests how to best garner (and keep) executive support and engagement while building an inclusive cultural program based on cumulative experience. ... Collaboration and Integration addresses the importance of integrating DevSecOps into organizational processes and stresses the key role that fostering a sense of collaboration plays in its successful implementation. ... Pragmatic Implementation outlines the practices, processes, and technologies that organizations should consider when building out any DevSecOps program and how to implement DevSecOps pragmatically. ... Bridging Compliance and Development is broken into three parts offering 1) an approach to compartmentalization and assessment with an eye to minimizing operating impact, 2) best practices on how compliance can be designed and implemented into applications, and 3) a look at the different security tooling practices that can provide assurance to compliance requirements.


Change Management Skills for Data Leaders

Strategic planning and decision-making are pivotal aspects of successful organizational transformation, requiring nuanced change management skills. Developing a strategy for organizational change in Data Management is a critical task that requires an understanding of both the current state of affairs and the desired future state. For data leaders, this involves conducting a thorough assessment to identify gaps between these two states. ... Developing effective communication and collaboration strategies is paramount in navigating the complexities of change management. A key component of this process involves crafting clear, concise, and transparent messaging that resonates with all stakeholders involved. This ensures that everyone, from team members to top-level management, understands not only the nature of the change but also its purpose and the benefits it promises to bring. ... Resilience is not just about enduring change but also about emerging stronger from it. Data leaders are often at the forefront of navigating through uncharted territories, be it technological advancements or market shifts, which requires an inherent ability to withstand pressure and bounce back from setbacks. 


Sanity Testing vs. Regression Testing: Key Differences

Sanity testing evaluates specific software application functionality after new features, modifications, or bug fixes have been deployed. In simple terms, it is quick testing to check whether the changes made are as per the Software Requirements Specification (SRS). It is generally performed after minor code adjustments to ensure seamless integration with existing functionalities. If the sanity test fails, it's a red flag that something's wrong, and the software might not be ready for further testing. This helps catch problems early on, saving time and effort down the road. ... Regression testing is the process of re-running tests on existing software applications to verify that new changes or additions haven't broken anything. It's a crucial step performed after every code alteration, big or small, to catch regressions – the re-emergence of old bugs due to new changes. By re-executing testing scenarios that were originally scripted when known issues were initially resolved, you can ensure that any recent alterations to an application haven't resulted in regression or compromised previously functioning components.
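
One hedged way to operationalize the distinction is with pytest markers, so a fast sanity subset can run right after a deploy while the full regression suite runs on every code alteration. The marker names below are a project convention invented for this example, not pytest built-ins.

```python
# Sketch: tag tests so "pytest -m sanity" gives a quick post-deploy check
# while the full run covers regression scenarios. Register the markers in
# pytest.ini (under "markers =") to avoid warnings.
import pytest

def login_page_loads() -> bool:
    return True  # stand-in for a real smoke check against the deployed app

def export_handles_empty_report() -> bool:
    return True  # stand-in for the scenario scripted when the bug was fixed

@pytest.mark.sanity
def test_new_feature_basically_works():
    # Quick check after a minor change: is the build sane enough to keep testing?
    assert login_page_loads()

@pytest.mark.regression
def test_issue_1234_stays_fixed():
    # Re-run on every alteration so the old bug cannot quietly resurface.
    assert export_handles_empty_report()
```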



Quote for the day:

"The two most important days in your life are the day you are born and the day you find out why." --Mark Twain

Daily Tech Digest - June 10, 2024

AI vs humans: Why soft skills are your secret weapon

AI can certainly assist with some aspects of the creative process, but true creativity is something only humans can achieve, for several reasons. Firstly, creativity often involves intuition, emotion and empathy, as well as thinking outside the box and making connections between seemingly unrelated concepts. It is often shaped by personal experiences and cultural background, making every individual’s creative work unique. ... Leadership and strategic management will continue to be driven by humans. When making decisions, people are able to consider various factors such as personal relationships or company culture. General awareness, intuition, understanding of broader contexts that lie beyond data and effective communication skills are all human traits. ... Humans possess a crucial trait that AI is unable to replicate (although it’s definitely coming closer): Empathy. AI can’t communicate with your team members at the same level, provide solutions to their problems or offer a listening ear when necessary. Managing a team means talking to people, listening and understanding their needs and motivations. The human touch is essential to make sure that everyone is on the same page.


How to Avoid Pitfalls and Mistakes When Coding for Quality

When code quantity is so exaggerated that redundancies emerge, "code bloat" occurs. An abundance of unnecessary code can adversely affect the site's performance and the code can become too complex to maintain. There are strategies for addressing redundancy; however, as code is implemented, it is crucial for it to be modularized, or broken down into smaller modular components with proper encapsulation and extraction. Code that is modularized promotes reuse, simplifies maintenance, and keeps the size of the code base in check. ... There is a tendency to "reinvent the wheel" when writing code. A more practical solution is to reuse libraries whenever possible because they can be utilized within different parts of the code. Sometimes, code bloat results from a historically bloated code base without an easy option to conduct modularization, extraction, or library reuse. In this case, the most effective strategy is to turn to code refactoring. Regularly take initiatives to refactor code, eliminate any unnecessary or duplicate logic, and improve the overall code structure of the repository over time.


The BEC battleground: Why zero trust and employee education are your best line of defence

Even with extensive employee training, some BEC scams can bypass human vigilance. Comprehensive security processes are essential to minimize their impact. The zero-trust security model is crucial here. It assumes no inherent trust for anyone, inside or outside the network. With zero trust, every user and device must be continuously authenticated before accessing any resources. This makes it much harder for attackers. Even if they steal a login credential, they can’t automatically access the entire system. A key component of zero trust is multi-factor authentication (MFA), which acts as multiple locks on every access point. Just like a physical security system requiring multiple forms of identification, MFA requires not just a username and password, but an additional verification factor like a code from a phone app or fingerprint scan. This makes unauthorised entry, including through BEC scams, much harder. So, any IT infrastructure implemented must have zero trust and MFA at its core. A complement to zero trust is the principle of least-privilege access: granting users only the minimum level of access required to perform their jobs.
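
The "code from a phone app" factor is usually TOTP, which is simple enough to show end to end. Below is a standard-library-only sketch of RFC 6238; the shared secret is a well-known documentation example, and real systems provision one per user at enrollment.

```python
# Minimal TOTP (RFC 6238) sketch: the server and the phone app compute the
# same 6-digit code from a shared secret and the current 30-second window.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval            # moving time factor
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # matches what an authenticator app shows
```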


Why CISOs need to build cyber fault tolerance into their business

For a rapidly evolving technology like GenAI, it is impossible to prevent all attacks at all times. The ability to adapt to, respond, and recover from inevitable issues is critical for organizations to explore GenAI successfully. Therefore, effective CISOs are complementing their prevention-oriented guidance for GenAI with effective response and recovery playbooks. Regarding third-party cybersecurity risk management, despite the cybersecurity function’s best efforts, organizations will continue to work with risky third parties. Cybersecurity’s real impact lies not in asking more due diligence questions, but in ensuring the business has documented and tested third-party-specific business continuity plans in place. “CISOs should be guiding the sponsors of third-party partners to create a formal third-party contingency plan, including things like an exit strategy, alternative suppliers list, and incident response playbooks,” said Mixter. “CISOs tabletop everything else. It’s time to bring tabletop exercises to third-party cyber risk management.”


AI system poisoning is a growing threat — is your security regime ready?

CISOs shouldn’t breathe a sigh of relief, McGladrey says, as their organizations could be impacted by those attacks if they are using the vendor-supplied corrupted AI systems. ... Security experts and CISOs themselves say many organizations are not prepared to detect and respond to poisoning attacks. “We’re a long way off from having truly robust security around AI because it’s evolving so quickly,” Stevenson says. He points to the Protiviti client that suffered a suspected poisoning attack, noting that workers at that company identified the possible attack because its “data was not synching up, and when they dived into it, they identified the issue. [The company did not find it because] a security tool had its bells and whistles going off.” He adds: “I don’t think many companies are set up to detect and respond to these kinds of attacks.” ... “The average CISO isn’t skilled in AI development and doesn’t have AI skills as a core competency,” says Jon France, CISO with ISC2. Even if they were AI experts, they would likely face challenges in determining whether a hacker had launched a successful poisoning attack.


Accelerate Transformation Through Agile Growth

The problem is that when you start the next calendar year in January, you get a false sense of confidence because December is still 12 months away — all the time in the world, or so it seems, to execute your annual strategic plan. But then by April, after the first quarter has ended, chances are you’ll have started to feel a bit behind. You won’t be overly worried, however; you know you still have plenty of time to catch up. But then you’ll get to September and hit the 100-day sprint, which typically comes right after Labor Day in the United States. Now, panic will set in as you race to the end of the year desperately trying to hit those annual goals that were established all the way back in January. In growth cycles longer than 90 days, we tend to get off track. But it doesn’t have to be this way. You can use the 90-Day Growth Method to bring your team together every quarter to review and celebrate your progress over the past 90 days, refocus on goals and actions, and renew your commitment to achieving them. Soon, you and your team will feel re-energized and ready to move forward with courage and confidence for the next 90 days.


We need a Red Hat for AI

To be successful, we need to move beyond the confusing hype and help enterprises make sense of AI. In other words, we need more trust (open models) and fewer moving parts ... OpenAI, however popular it may be today, is not the solution. It just keeps compounding the problem with proliferating models. OpenAI throws more and more of your data into its LLMs, making them better but not any easier for enterprises to use in production. Nor is it alone. Google, Anthropic, Mistral, etc., etc., all have LLMs they want you to use, and each seems to be bigger/better/faster than the last, but no clearer for the average enterprise. ... You’d expect the cloud vendors to fill this role, but they’ve kept to their preexisting playbooks for the most part. AWS, for example, has built a $100 billion run-rate business by saving customers from the “undifferentiated heavy lifting” of managing databases, operating systems, etc. Head to the AWS generative AI page and you’ll see they’re lining up to offer similar services for customers with AI. But LLMs aren’t operating systems or databases or some other known element in enterprise computing. They’re still pixie dust and magic.


How Data Integration Is Evolving Beyond ETL

From an overall trend perspective, with the explosive growth of global data, the emergence of large models, and the proliferation of data engines for various scenarios, the rise of real-time data has brought data integration back to the forefront of the data field. If data is considered a new energy source, then data integration is like the pipeline for this new energy. The more data engines there are, the higher the efficiency, data source compatibility, and usability requirements of the pipeline will be. Although data integration will eventually face challenges from Zero ETL, data virtualization, and DataFabric, for the foreseeable future, the performance, accuracy, and ROI of these technologies have yet to reach data integration’s level of popularity. Otherwise, the most popular data engines in the United States would not be Snowflake or Delta Lake but Trino. Of course, I believe that in the next 10 years, under the circumstances of DataFabric x large models, virtualization + EtLT + data routing may be the ultimate solution for data integration. In short, as long as data volume grows, the pipelines between data will always exist.


Protecting your digital transformation from value erosion

The first form of value erosion pertains to cost increases within your project without an equivalent increase in the value or activities being delivered. With project delays, for example, there are usually additional costs incurred related to resource carryover because of the timeline increase. In this instance, the absence of additional work being delivered, or future work being pulled forward to offset the additional costs, is a prime illustration of value erosion. ... Decrease in value without decreased costs: A second form occurs when there’s a decrease in value without a cost adjustment. This can happen due to changing business priorities or project delays, especially within the build phase. As an alternative to extending the project timeline, organizations may decide to prioritize and reduce features to meet deadlines. ... Failure to identify and plan for potential risks leaves projects vulnerable to unforeseen complications and budgetary concerns. Large variances in initial SI responses can be attributed to different assumptions on scope and service levels provided.


Ask a Data Ethicist: What Is Data Sovereignty?

Put simply, data sovereignty relates to who has the power to govern data. It determines who is legally empowered to make decisions about the collection and use of data. We can think about this in the context of two governments negotiating between each other, each having sovereign powers of self-determination. Indigenous governments are claiming their sovereign rights to their people’s data. On the one hand, this is a response to the atrocities that have taken place with respect to data gathered and taken beyond the control of Indigenous communities by researchers, governments, and other non-Indigenous parties. Yet, as data becomes increasingly important, many countries are seeking to set regulatory standards for data. It makes sense that Indigenous governments would assert similar rights with respect to their people’s data. ... Data sovereignty is an important part of Canada’s Truth and Reconciliation calls to action. The FNIGC governs the relevant processes for those seeking to work with First Nations in Canada to appropriately access data.



Quote for the day:

"The secret to success is good leadership, and good leadership is all about making the lives of your team members or workers better." -- Tony Dungy

Daily Tech Digest - June 09, 2024

AI Systems Are Learning to Lie and Deceive

Put another way, as Park explained in a press release: "We found that Meta’s AI had learned to be a master of deception." "While Meta succeeded in training its AI to win in the game of Diplomacy," the MIT physicist said in the school's statement, "Meta failed to train its AI to win honestly." In a statement to the New York Post after the research was first published, Meta made a salient point when echoing Park's assertion about Cicero's manipulative prowess: that "the models our researchers built are trained solely to play the game Diplomacy." Well-known for expressly allowing lying, Diplomacy has jokingly been referred to as a friendship-ending game because it encourages pulling one over on opponents, and if Cicero was trained exclusively on its rulebook, then it was essentially trained to lie. Reading between the lines, neither study has demonstrated that AI models are lying over their own volition, but instead doing so because they've either been trained or jailbroken to do so. That's good news for those concerned about AI developing sentience — but very bad news if you're worried about someone building an LLM with mass manipulation as a goal.


AI search answers are the fast food of your information diet – convenient and tasty

These AI features vacuum up information from the internet and other available sources and spit out an answer based on how they are trained to associate words. A core argument against them is that they mostly remove from the equation the user’s judgment, agency and opportunity to learn. This may be OK for many searches. Want a description of how inflation has affected grocery prices in the past five years, or a summary of what the European Union AI Act includes? AI Overviews can be a good way to cut through a lot of documents and extract those specific answers. But people’s searching needs don’t end with factual information. They look for ideas, opinions and advice. Looking for suggestions about how to keep the cheese from sliding off your pizza? Google will tell you that you should add some glue to the sauce. Or wondering if running with scissors has any health benefits? Sure, Google will say, “it can also improve your pores and give you strength”. While a reasonable user can understand that such outrageous answers are likely to be wrong, it’s hard to detect that for factual questions.


Future of biometric payments and digital ID taking shape

Japan is adding support for digital wallets to its My Number national digital ID system, starting with Apple Wallet next spring. The passage of a new law updating the My Number system enables this first step for Apple Wallet IDs outside of the U.S. Lawmakers envisage the use of My Numbers on smartphones for a wide range of public and private sector interactions. ... eEstonia Digital Transformation Adviser Erika Piirmets tells Biometric Update in an interview that the country’s mature digital government makes high uptake of its EU Digital Identity Wallet quite likely. Piirmets explains the evolving ecosystem of ID credentials in Estonia, the country’s work to enable cross-border interoperability, and ongoing work to bring electronic voting to mobile devices. ... Mobile driver’s licenses represent a major opportunity to move towards decentralized digital identity, but a panel at Identiverse hosted by OpenID Foundation notes that complex standards need to be orchestrated. For mDLs and digital wallets to be adopted, they need to be interoperable. Wallets need to be trusted by issuers, and relying parties need to be trusted by wallets.


How to Build The Entrepreneurial Spirit

Open time in employees' schedules for creative thinking. Cutting one hour of a redundant meeting for a 10-person team could yield 10 hours of individual employee exploration time. Don't shoot down ideas if employees can't make a strong business case yet. New and creative products lack the market data to back up the innovation. Give people the space and time to experiment and collect evidence. Blocking out time for "innovation days" is also becoming a common strategy. ... Employees also hesitate to take risks when they fear failure will earn them negative feedback or believe that playing it safe is more likely to lead to a promotion. At truly entrepreneurial companies, employees feel confident that risk-taking, within certain bounds, will be accepted and even rewarded. The Indian conglomerate Tata Group, for example, gives a "Dare to Try" award for brave attempts at unsuccessful innovations. ... Encourage employees to reflect on what innovations they might pursue and where their specific talents may be most helpful. Perhaps there's an opportunity to apply existing expertise to a longstanding problem or to bring prior insights to a new domain with an unexpected connection.


Why SAST + DAST can't be enough

These hybrid techniques highlight the fact that the dichotomous approach to application security offered by SAST/DAST is quickly being deprecated. Having two big security staples stretched out over the SDLC is not enough to adapt to the new categories of threats around software code. In fact, when it comes to hardcoded credentials, a whole new aspect has to be taken into account. ... Where SAST fails to convey the idea of probable “dormant” threats inside the git history, new concepts and methods need to emerge. This is why at GitGuardian we believe secrets detection deserves its very own category, and work towards raising the general awareness around its benefits. Older concepts are too narrow to encompass these new and actively exploited kinds of threats. But that’s not the end of the story: code reviews are a flawed mechanism, too. ... Both are still necessary, but no longer sufficient to shield application security from vulnerabilities. Unfortunately, the security landscape is moving at a great pace and the proliferation of intricate concepts makes it sometimes difficult to grasp the definitive scope of action and limitations of some tools.
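
A toy version of what a secrets scanner adds on top of SAST might look like the sketch below: pattern-matching source text for credential shapes. Real tools also walk the full git history and use entropy heuristics; these two patterns are deliberately simplified.

```python
# Simplified secrets-detection sketch. The AWS pattern matches the documented
# access-key-ID shape; the generic pattern is a rough illustration only.
import re

SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic API key": re.compile(
        r"(?i)(api[_-]?key|secret)\s*[:=]\s*['\"][A-Za-z0-9/+=_-]{16,}['\"]"
    ),
}

def scan(text: str, origin: str) -> list[str]:
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append(f"{origin}:{lineno}: possible {name}")
    return findings

print(scan('aws_key = "AKIAIOSFODNN7EXAMPLE"', "config.py"))
```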


Should you use BitLocker on Windows?

It's generally good to have BitLocker enabled, especially for fixed drives on your PC. However, if you have drives that move between different PCs, BitLocker may be a problem because it's only available on Windows. That means that not only do you have to use a Windows PC to encrypt your drive, but only Windows PCs can decrypt it, so you won't be able to read your data on a separate computer. BitLocker may also cause some hassle if you're trying to access recovery options on your PC or the computer no longer boots for some reason. In order to access a BitLocker-encrypted drive without your usual password, you'll need to have the recovery key, which you may not always have handy. That's not exactly a problem, but it's something to be aware of. ... So BitLocker is generally good and you should have it enabled, but there's no need to scramble if you haven't done it before. Windows 11 will encrypt the fixed drives on your PC by default, so you're already set on that front unless you want to disable it for one of the reasons mentioned above.


Proposed EU Chat Control law wants permission to scan your WhatsApp messages

The key here is the 'user consent' clause. That's the way to make the scanning of privately shared multimedia files not an obligation but a choice. How they plan to do so, however, looks more like blackmail. As we mentioned, if you want to share a photo, video, or URL with your friend on WhatsApp you must give consent, or just stick to texting, calls, and voice messages. Commenting on this point, Digneaux said: "There is no consent. There is no choice. If innocent users don’t agree to let the authorities snoop on their messages, emails, photos, and videos they will simply be cut off from the modern world." Proton isn't alone in feeling this way. A group of over 60 organizations, including Proton, Mozilla, Signal, Surfshark, and Tuta, alongside 50+ individuals, signed a joint statement to voice their concerns against the new proposal. "Coerced consent is not freely given consent," wrote the group. "If the user has no real choice, feels compelled to consent, or would de facto be barred from the service if they do not consent, then the consent given will not be freely given."


AI Gateways vs. API Gateways: What’s the Difference?

Most organizations today consume AI outputs via a third-party API, either from OpenAI, Hugging Face or one of the cloud hyperscalers. Enterprises that actually build, tune and host their own models also consume them via internal APIs. The AI gateway’s fundamental job is to make it easy for application developers, AI data engineers and operational teams to quickly call up and connect AI APIs to their applications. This works in a similar way to API gateways. That said, there are critical differences between API and AI gateways. For example, the computing requirements of AI applications are very different from computing requirements of traditional applications. Different hardware is required. Training AI models, tuning AI models, adding additional specialized data to them and querying AI models each might have a different performance, latency or bandwidth requirement. The inherent parallelism of deep learning or real-time response requirements of inferencing may call for different ways to distribute AI workloads. Measuring how much an AI system is consuming can also require a specialized understanding of tokens and model efficiency.
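
Token awareness is one concrete difference: an AI gateway meters requests in tokens rather than bytes or call counts. A hedged sketch with the tiktoken library, where the encoding name and budget are illustrative choices:

```python
# Sketch: metering an LLM request in tokens, the unit an AI gateway budgets,
# rate-limits, and bills by.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by many chat models

def tokens_used(prompt: str) -> int:
    return len(enc.encode(prompt))

def within_budget(prompt: str, consumed: int, budget: int = 100_000) -> bool:
    # A gateway would track `consumed` per tenant and reject or queue any
    # request that would push the tenant past its token budget.
    return consumed + tokens_used(prompt) <= budget

print(tokens_used("Summarize this quarter's cloud spend in three bullets."))
```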


Dispelling the disillusion – demystifying the digital twin

While digital twins certainly have a part to play when it comes to building development and construction, historically this has seen some overengineered, one-size-fits-all solutions, which often have not been best suited to the task at hand. This initial strategy raised expectations that the digital twin would solve all the client’s problems, only to ultimately underperform and disappoint. At Aecom, digital twins are developed as an ecosystem of different data sources, brought together harmoniously to provide a solution that prioritises resolving the use case or specific business need – moving away from multipurpose, off-the-rack to a more tailored, quick-time-to-value approach. Achieving value for the end user comes from determining what interface is required to provide the information they need, reducing things to their simplest components to address the use case at hand. By starting light, with a vision and long-term strategy in place, you can continue to build up and iterate your digital ecosystem where you can keep plugging in new technologies and integrating data sources, allowing it to grow and develop over time.


Feds Issue Alerts for Flaws in 2 Baxter Medical Devices

Many currently deployed medical device products in use today simply did not have sufficient security testing from their manufacturers - "full stop," said David Brumley, CEO of security firm ForAllSecure and cybersecurity professor at Carnegie Mellon University. While the Food and Drug Administration has a list of new cybersecurity expectations from manufacturers seeking premarket approval for their new medical devices, that intensified FDA review - empowered by Congress - is less than two years old. "The new FDA guidance is only 'premarket,' meaning it's only for new devices that have not been fielded. Everything out there already deployed hasn't had sufficient security testing, and that's security debt we're seeing catch up with us now," Brumley said. The FDA needs to provide stronger regulatory scrutiny and guidance for "currently fielded devices meeting modern security standards, not just premarket devices," Brumley said. "We also need the FDA to be more prescriptive, not less prescriptive. Putting it on the hospitals is the wrong place; it's like asking you to change how you drive your car while flying down the freeway at 80 miles per hour to fix a vendor issue."



Quote for the day:

"The whole point of getting things done is knowing what to leave undone." -- Lady Stella Reading

Daily Tech Digest - June 08, 2024

Understanding Security's New Blind Spot: Shadow Engineering

Shadow engineering leaves security teams with little or no control over LCNC apps that citizen developers can deploy. These apps also bypass the usual code tests designed to flag software vulnerabilities and misconfigurations, which could lead to a breach. This lack of visibility prevents organizations from enforcing policies to keep them in compliance with corporate or industry security standards. ... LCNC apps have many of the same problems found in conventionally developed software, such as hard-coded or default passwords and leaky data. A simple application asking employees for their T-shirt size for a company event could give hackers access to their HR files and protected data. LCNC apps should routinely be evaluated for threats and vulnerabilities, so they can be detected and remediated. ... Give citizen developers guidance in easy-to understand terms to help them remediate risks themselves as quickly and easily as possible. Collaborate with business developers to ensure that security is integrated into the development process of LCNC applications going forward.


‘Technology must augment humanity’: An interview with former IBM CEO Ginni Rometty

While we can't control disruptions, we can control our outlook on the future. Leaders must instill confidence in their teams, emphasising the inevitability of change and the collective ability to find positive solutions. Honesty is a form of optimism, so be honest with yourself and your teams about the issues at hand, resisting attempts to ignore or minimise them. ... Problem-solving is at the core of leadership, so leaders should be unafraid to ask questions, seek insights from others, and involve their teams and wider network in finding solutions. Remember, you do not have to tackle everything alone or have all the answers. When I face a complex problem, I dissect it into manageable pieces and think through each disparate part. ... The right relationships in your life, personal and professional, provide perspective and ideas which is essential for progress. Building a robust network—from friends and family to colleagues and industry peers—provides support and inspiration to maintain optimism and courage amid disruption. The more diverse your network, the more people you can call on to fuel your optimism and courage in the face of disruption.


How Cybersecurity and Sustainability Intersect

Cybersecurity and sustainability are discrete functions in many enterprises, yet they could benefit greatly from being de-siloed. Sustainability and cybersecurity initiatives need C-suite awareness and resources to permeate an enterprise’s culture and actually achieve their goals. “It's not a one-person show anymore. It's really an ownership in that responsibility and a stewardship that cuts across functional leadership across … the entire organization,” says Lynch. In more mature organizations, cybersecurity already has board-level involvement, which can make it easier to see and act on its intersection with sustainability. But for many organizations, cybersecurity and sustainability are separate and even back-office functions. “The cybersecurity leader should not wait for someone to come [and] invite them into these conversations,” says Govindankutty. The stakeholders who need to be involved in cybersecurity and sustainability extend beyond an enterprise’s four walls. Third-party vendors are a vital part of an enterprise’s ecosystem.


Flipping The Script On Startup Success

The first step is to identify the narrowly defined vertical market segments that the company will focus on. The second step is to find a lighthouse customer or two to focus all the team’s attention on to define the minimum viable product (MVP). That is iterative, as the customer and the product team go back and forth on must-have features. Then the startup team tests that candidate MVP with a few other customers. ... If you ask any experienced entrepreneur, investor or board member what the most important thing a startup CEO must stay on top of is, it’s to know at all times how much cash they have, what the monthly burn rate is and how long the runway is before cash runs out. Many mistakes are excusable and recoverable, but running out of cash by surprise is neither. ... Culture is not pizza and beer on Fridays, foosball tables or little rooms filled with toys. It is about the values of the company and how they are espoused. It is about the tone the CEO sets and how they communicate with all of their constituents. And the importance of culture is not just about company morale, although that is very important. It is about attracting and retaining the best talent. While it might be nice to think you can put this off while focusing on the first four things, you would be wrong.
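
The cash discipline above reduces to arithmetic that is worth keeping explicit. A trivial sketch, with made-up figures:

```python
# Runway = cash on hand / net monthly burn. The figures are illustrative.
def runway_months(cash_on_hand: float, monthly_burn: float) -> float:
    """Months until cash runs out at the current net burn rate."""
    if monthly_burn <= 0:
        return float("inf")  # cash-flow positive or better: no fixed runway
    return cash_on_hand / monthly_burn

print(runway_months(cash_on_hand=2_400_000, monthly_burn=200_000))  # 12.0 months
```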


Empowering Developers to Harness Sensor Data for Advanced Analytics

Data from sensors offers a treasure trove of insights from the physical world for data scientists. From tracking temperature fluctuations in a greenhouse to analyzing the vibrations of industrial machines in a manufacturing plant, these tiny devices capture crucial information that can be used for groundbreaking research and development. The journey from collecting raw sensor data to actionable analysis can be riddled with stumbling blocks, as the realities of hardware components and environmental conditions come into play. The typical approach to sensor data capture often involves a cumbersome workflow across the various teams involved, including data scientists and engineers. While data scientists meticulously define sensor requirements and prepare their notebooks to process the information, engineers deal with the complexities of hardware deployment and software updates that reduce the scientists’ ability to quickly adjust these variables on the fly. This creates a long feedback loop that delays the pace of innovation across the organization.


To lead a technology team, immerse yourself in the business first

When asked to rank the defining characteristics of a leading CIO, respondents were split between the conventional and contemporary, saying the traditional, more IT-centric qualities are just as important as the strategic and more customer-focused ones. While aligning tech vision and strategy with the business has been the role of CIOs and technology leaders for some time, the scope of their duties now extends deeper into the business itself. "Establishing and managing a tech vision isn't enough," said DiLorenzo. "Today's CIOs need to own all the various technology uses across their organizations and ensure they're actively coordinating and orchestrating their fellow tech leaders -- as well as their business peers -- to co-create a vision and tech strategy that aligns with, and furthers, the overall enterprise strategy." Getting to a leadership position also requires immersing oneself in the business, Shaikh advised. "Business acumen, which includes understanding various business functions and industry dynamics, can be cultivated by spending time in business units," she said. "This understanding is crucial for strategic thinking, to help identify opportunities where technology can impact goals."


The unseen gen AI revolution on the AI PC and the edge

The shift towards edge and PC-based AI is not without its challenges. Privacy and security concerns are paramount, as devices become more autonomous and capable of processing sensitive data. Companies must make privacy and AI ethics the cornerstone of their approach, ensuring that as AI becomes more integrated into our devices, it does so in a manner that respects user privacy and trust. Moreover, the energy efficiency of AI workloads is a critical consideration, especially for battery-powered devices. Advancements in low-power, high-performance processors are pivotal in addressing this challenge, ensuring that the benefits of gen AI are not offset by decreased device longevity or increased environmental impact. Intel’s OpenVINO toolkit further enhances these benefits by optimizing deep learning models for fast, efficient performance across Intel’s hardware portfolio. This optimization enables customers to deploy AI applications more widely, even in resource-constrained environments, without sacrificing performance. As we enter this new era, the way we think about gen AI and how we engage with it will continue to change.
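
As a hedged sketch of the OpenVINO flow mentioned above, the fragment below compiles a model for a local device and runs one inference. The model path and dummy input shape are placeholders; the calls follow the openvino Python package's Core workflow.

```python
# Sketch: load an OpenVINO IR model, compile it for a local device, infer.
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")         # placeholder IR model path
compiled = core.compile_model(model, "CPU")  # or "GPU"/"NPU" where available

dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # assumed input shape
output = compiled([dummy])[0]                # inference runs on the local device
print(output.shape)
```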


Enhancing Cloud Security in Response to Growing Digital Threats

Hybrid cloud environments, where public clouds combine with on-premises infrastructure, present unique security challenges. Secure migration tools and techniques are vital to prevent data leaks or unauthorized access: encrypt data before transfer and enforce controls on both ends during migration to reduce the associated risks. Network segmentation in hybrid cloud environments requires thorough interconnectivity planning. Carefully configure firewalls and network access controls to ensure only authorized traffic flows between on-premises resources and those hosted in the cloud. Visibility across hybrid cloud environments requires centralized monitoring to enhance threat detection; SIEM solutions can collect security logs from both on-premises and cloud systems, providing a unified view of an enterprise’s security posture. The more organizations embrace cloud computing, the more they must prepare for emerging trends. Zero-trust security models, which require continuous authentication and authorization regardless of device or location, are increasingly popular.
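
As one concrete illustration of the encrypt-before-transfer advice, here is a minimal sketch using the Fernet interface from Python's cryptography package; the file names are placeholders, and a real migration would source the key from a key-management service rather than generate it inline:

    # Minimal sketch: encrypt a file before it leaves the premises.
    # File names are placeholders; store the key in a KMS, never beside
    # the data it protects.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()
    fernet = Fernet(key)

    with open("export.db", "rb") as f:        # hypothetical source file
        ciphertext = fernet.encrypt(f.read())

    with open("export.db.enc", "wb") as f:    # safe to transfer to the cloud
        f.write(ciphertext)

    # On the receiving end, the same key reverses the operation.
    plaintext = fernet.decrypt(ciphertext)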


Ethical Issues in Information Technology (IT)

Establishing ethical IT practices is also important because people’s trust in the tech industry chips away each time they learn about unethical practices, especially in the wake of reports on data usage by companies such as Facebook and Google. “If companies don’t have ethical IT practices in place, they’re going to lose the trust of their customers and clients,” says Ferebee. “IT professionals need to take it seriously. They also need to let the public know they take it seriously so the public feels safe using their products and services.” Whether or not you’re in a leadership position, it is important to lead by example when it comes to ethics in IT. “People are often afraid to speak up because they’re concerned with the repercussions,” says Ferebee. “But when it comes to ethics in IT, you need to speak up — lead by example, advocate for it, and talk about it all the time. That could include reporting ethical issues, sourcing or creating and then implementing ethics training, and developing internal frameworks for your IT department. You don’t have to be the director of IT to start implementing this.”


Establishing Trust in AI Systems: 5 Best Practices for Better Governance

Security culture drives both behaviors and beliefs. A security-first organization promotes information sharing, transparency, and collaboration. When risks are discovered, or when issues occur, communication should be immediate and designed to clearly convey to employees how their behaviors and actions can both support and detract from security efforts. Enlist employees in these efforts by ensuring that your culture is positive and supportive. ... Security culture does not exist in a vacuum and does not evolve in a silo. Input from a wide range of stakeholders—from employees to customers and partners, regulators and the board—is critical for ensuring that you understand how AI is enabling efficiencies, and where risks may be emerging. ... When you seek input from key constituents in an open and transparent manner, they will be more likely to share their concerns and help uncover potential risks while there is still time to address those risks adequately. Acknowledge and respond to feedback promptly, and highlight the positive impacts of that feedback.



Quote for the day:

"Don't wait for the perfect moment take the moment and make it perfect." -- Aryn Kyle

Daily Tech Digest - June 07, 2024

Technology, Regulations Can't Save Orgs From Deepfake Harm

Deepfakes have already become a tool for attackers behind business-leader impersonation fraud — in the past referred to as business email compromise (BEC) — where AI-generated audio and video of a corporate executive are used to fool lower-level employees into transferring money or taking other sensitive actions. In an incident disclosed in February, for example, a Hong Kong-based employee of a multinational corporation transferred about $25.5 million after attackers used deepfakes during a conference call to instruct the worker to make the transfers. ... Creating trusted channels of communication should be a priority for all companies, and not just for sensitive processes — such as initiating a payment or transfer — but also for communications to the public, says Deep Instinct's Froggett. "The best companies are already preparing, trying to think of the eventualities. ... You need legal, regulatory, and compliance groups — obviously, marketing and communication — to be able to mobilize to combat any misinformation," he says. 


Juniper Networks brings industry’s first and only AIOps to WAN routing, delivering AI-native insight for exceptional experiences

Juniper is introducing a new security insights Mist dashboard within its Premium Analytics product to provide comprehensive security event visibility and persona-based policy activation and threat responses. This increased visibility provides actionable intelligence to security teams, enabling them to quickly identify incidents and respond to threats in real-time—thereby improving the user experience. The security insights dashboard in Premium Analytics also helps break down siloed network and security management. ... Another innovation announced by Juniper, Routing Assurance, brings the company’s high performance, sustainable and versatile enterprise edge routing platforms under the Mist AI and cloud umbrella. ... In addition, Marvis, the industry’s first and only AI-Native VNA with a conversational interface built on more than seven years of learning, has been expanded to cover enterprise WAN edge routing. With Marvis’ conversational interface, IT teams can use simple language queries to identify and fix routing issues, including knowledge base queries powered by Generative AI.


How Sprinting Slows You Down: A Better Way to Build Software

First, start by killing the deadlines. In our model, engineers determine when a feature is ready to ship. They are thus able to make principled engineering decisions about what to implement now versus later, delivering better code than they would when making decisions driven by a two-week deadline. Second, assign smaller teams to features and give them greater scope. Because the teams are smaller (often just one engineer!), many new features are developed in parallel. These solo programmers or small teams own the entirety of implementation from back to front. There are no daily standups and needless communication is eliminated. And because the engineers control the implementation across the stack, they can make principled engineering decisions about how to build their functionality, rather than decisions constrained by the sliver of the codebase they happen to own, delivering a more cohesive implementation. The common thread between these two ideas is that they institutionally support making principled decisions, because good decisions today lead to better outcomes tomorrow. 


Why is site selection so important for the data center industry?

Climate considerations are paramount, with weather conditions impacting hazard exposure and vulnerability. Mitigating natural hazards such as floods, earthquakes, and hurricanes through engineered solutions is essential. Access to major highways and airports ensures logistical efficiency, particularly during construction and operation. The air quality surrounding a site affects equipment performance and employee health, necessitating measures to mitigate pollution. Historical data on natural disasters informs risk management strategies and facility design. Ground conditions must undergo thorough geotechnical investigation to assess structural stability and suitability for construction. The availability of robust communications infrastructure, particularly fiber-optic networks, is critical for seamless connectivity. Low latency, enabled by proximity to subsea cable landing sites and dense fiber networks, is imperative for high-performance applications. Geopolitical stability, regulatory environments, and taxation policies influence site selection decisions. Electrical power availability and cost significantly impact operational expenses, with renewable resources offering sustainability benefits.


Maximizing SaaS application analytics value with AI

AI analytics tools offer businesses the opportunity to optimize conversion rates, whether through form submissions, purchases, sign-ups or subscriptions. AI-based analytics programs can automate funnel analyses (which identify where in the conversion funnel users drop off), A/B tests (where developers test multiple design elements, features or conversion paths to see which performs better) and call-to-action button optimization to increase conversions. Data insights from AI and ML also help improve product marketing and increase overall app profitability, both vital to maintaining SaaS applications. Companies can use AI to automate tedious marketing tasks (such as lead generation and ad targeting), maximizing both advertising ROI and conversion rates. And with ML features, developers can track user activity to more accurately segment and sell products to the user base. ... Managing IT infrastructure can be an expensive undertaking, especially for an enterprise running a large network of cloud-native applications. AI and ML features help minimize cloud expenditures by automating SaaS process responsibilities and streamlining workflows.
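
To make the funnel-analysis idea concrete, here is a minimal sketch of the computation such tools automate, assuming an events table with one row per user per funnel stage reached; the stage names and counts are illustrative:

    # Minimal sketch of a funnel analysis: count unique users per stage
    # and the conversion from each stage to the next. Data is illustrative.
    import pandas as pd

    events = pd.DataFrame({
        "user_id": [1, 2, 3, 4, 1, 2, 3, 1, 2, 1],
        "stage": ["visit"] * 4 + ["signup"] * 3 + ["trial"] * 2 + ["purchase"],
    })

    funnel_order = ["visit", "signup", "trial", "purchase"]
    counts = events.groupby("stage")["user_id"].nunique().reindex(funnel_order)

    # Step conversion: the share of the previous stage's users who advanced.
    step_conversion = counts / counts.shift(1)
    print(pd.DataFrame({"users": counts, "step_conversion": step_conversion}))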


Inside the 'Secure By Design' Revolution

While not legally binding, the pledge encourages those that sign up to show demonstrable progress in each of the seven goals within a year. “One thing that we like, and I think a lot of industry likes, is it allows for flexibility in showing how you meet those goals,” Charley Snyder, head of security policy at Google, tells InformationWeek. If pledge signers are unable to show progress within a year, CISA encourages them to communicate what steps they did take and share what challenges they faced. The agency plans to offer its support throughout the year. “We are going to be working very closely with the pledge signers to help make progress on these pledge goals,” Zabierek explains. “We worked collaboratively with industry to develop the actions, and we're going to maintain that collaboration.” ... Tidelift, a company that partners with open-source maintainers, is not only applying the principles outlined in the pledge to its own software, but it also published an update on the ways it is working to help open-source maintainers achieve the pledge goals.


The next frontier: AI, VR, and the future of educational assessment

One of the most promising applications of AI in assessment is its ability to analyze vast amounts of data to identify patterns and trends in student performance, enabling educators to gain valuable insights into student progress and learning outcomes. By harnessing AI-powered analytics, educators can track student achievement over time, identify areas for improvement, and tailor instruction to address individual learning needs more effectively. ... In addition to AI, Virtual Reality (VR) is revolutionising the assessment landscape by offering immersive and interactive experiences that allow students to engage with content in three-dimensional, multisensory environments, providing opportunities for experiential learning and authentic assessment. Furthermore, VR technology enables educators to assess higher-order thinking skills such as problem-solving, critical thinking, and creativity in ways that are not feasible with traditional assessment methods. Through VR-based scenarios and simulations, students can engage in complex, real-world challenges, make decisions, and experience the consequences of their actions.


Cyber insurance isn’t the answer for ransom payments

Contrary to the belief that having cyber insurance increases the likelihood of ransom payments, Veeam’s research indicates otherwise. Despite only a minority of organizations having a policy to pay, 81% opted to do so. Interestingly, 65% paid with insurance and another 21% had insurance but chose to pay without making a claim. This implies that in 2023, 86% of organizations had insurance coverage that could have been utilized for a cyber event. Ransoms paid averaged only 32% of the overall financial impact on an organization post-attack. Moreover, cyber insurance will not cover the entirety of the costs associated with an attack: only 62% of the overall impact is in some way reclaimable through insurance or other means, with the remainder coming out of the organization’s own budget. ... Alarmingly, 63% of organizations are at risk of reintroducing infections while recovering from ransomware attacks or significant IT disasters. Pressured to restore IT operations quickly and influenced by executives, many organizations skip vital steps, such as rescanning data in quarantine, increasing the likelihood that IT teams inadvertently restore infected data or malware.
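
A quick worked example of what those percentages imply, using a hypothetical $10 million total incident cost (the dollar figure is illustrative; only the percentages come from the survey):

    # Illustrative arithmetic only: the $10M total impact is hypothetical;
    # the 32% and 62% figures are the survey numbers quoted above.
    total_impact = 10_000_000
    ransom_paid = 0.32 * total_impact         # ~ $3.2M ransom
    reclaimable = 0.62 * total_impact         # ~ $6.2M recoverable overall
    unrecovered = total_impact - reclaimable  # ~ $3.8M borne by the organization
    print(f"${ransom_paid:,.0f} / ${reclaimable:,.0f} / ${unrecovered:,.0f}")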


Generative AI agents will revolutionize AI architecture

AI agents possess advanced natural language processing (NLP) capabilities. They can comprehend, interpret, and generate human language, facilitating easy interaction and communication with users and other systems. These agents also work alongside other AI agents or human operators in collaborative and iterative workflows. Through continuous learning and feedback, they refine their outputs and improve overall performance. On paper, AI agents should be in wide use today. Look at all the pros I’ve listed. The downsides are much more difficult to understand. Even though you need tools to build AI agents, the tools are all over the place regarding what they are and how to use them. Don’t let vendors tell you otherwise. First, these are complex beasties to write and deploy. Architects who can design AI agents and developers who can effectively build AI agents are few and far between. I’ve witnessed teams announce they will use agent-based technology and then build something that falls far short of a solution for the proposed business case. Second, you can’t put much into these AI agents or they are no longer agents. You missed the point if your AI agents are vast clusters of GPUs. 
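
For readers unfamiliar with the basic shape of an agent, here is a minimal sketch of the observe-decide-act loop that underlies most designs; call_llm and the tool registry are hypothetical stand-ins, not any vendor's API:

    # Minimal sketch of an AI agent loop: the model picks a tool, the agent
    # runs it, and the observation is fed back for the next decision.
    import json

    def call_llm(messages):
        # Hypothetical stand-in for a real chat-completion API call.
        # This canned version immediately returns a final answer.
        return json.dumps({"final_answer": "(stub answer)"})

    TOOLS = {
        "search_docs": lambda q: f"(stub) top results for {q!r}",
    }

    def run_agent(task, max_steps=5):
        messages = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            reply = json.loads(call_llm(messages))
            if "final_answer" in reply:
                return reply["final_answer"]
            observation = TOOLS[reply["tool"]](reply["argument"])
            messages.append({"role": "assistant", "content": json.dumps(reply)})
            messages.append({"role": "user", "content": f"Observation: {observation}"})
        return "Stopped: step budget exhausted."

    print(run_agent("Summarize our API docs."))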


AI in Healthcare: Bridging the Gap Between Proof and Practice

“We see huge social impacts from AI in healthcare – in the data we’ve collected regionally in Pennsylvania, for example,” Dr. Sadeghian added. “Many rural areas have insufficient access to medical procedures. AI will impact society through both safety and convenience. Everybody has smartphones now; why not have the doctor in your hand? A cultural shift is underway.” AI can give a preliminary screening and keep people out of cities and congested areas, bringing access to more rural areas and saving office visits for people who need them. This also impacts transportation, walkability, and other aspects of civic planning – even pollution mitigation. Inviting the black box of AI into healthcare isn’t some hazy dream. It’s happening today. Younger generations are the most scientifically engaged ever, though, which means consensus-building on tech policy could move faster going forward. Politicians have noticed the social, cultural, and economic value of investing in science, technology, engineering, and mathematics education. 



Quote for the day:

"If you don't value your time, neither will others. Stop giving away your time and talents- start charging for it." -- Kim Garst