
Daily Tech Digest - January 24, 2025


Quote for the day:

"Leaders are people who believe so passionately that they can seduce other people into sharing their dream." -- Warren G. Bennis


What comes after Design thinking

The first and most obvious one is that we can no longer afford to design things solely for humans. We clearly need to think in non-human, non-monocentric terms if we want to achieve real, positive, long-term impact. Second, HCD fell short in making its practitioners think in systems and leverage the power of relationships to truly understand and redesign what has not been serving us or our planet. Lastly, while HCD accomplished great feats in designing better products and services that solve today’s challenges, it fell short in broadening horizons so that these products and systems could pave the way for regenerative systems: the ones that go beyond sustainability and actively restore and revitalize ecosystems, communities, and resources to create lasting, positive impact. Now, everything that we put out in the world needs to have an answer to how it is contributing to a regenerative future. And in order to build a regenerative future, we need to start prioritizing something that is integral to nature: relationships. We need to grow relational capacity, from designing for better interpersonal relationships to establishing systems that facilitate cross-organizational collaboration. We need to think about relational networks and harness their power to recreate more just, trustful, and better functioning systems. We need to think in communities.


FinOps automation: Raising the bar on lowering cloud costs

Successful FinOps automation requires strategies that exploit efficiencies from every angle of cloud optimization. Good data management, negotiations, data manipulation capabilities, and cloud cost distribution strategies are critical to automating cost-effective solutions to minimize cloud spend. This article examines how expert FinOps leaders have directed their automation efforts to achieve the greatest benefits. ... Effective automation relies on well-structured data. Intuit and Roku have demonstrated the importance of robust data management strategies, focusing on AWS accounts and Kubernetes cost allocation. Good data engineering enables transparency, visibility, and accurate budgeting and forecasting. ... Automation efforts should focus on areas with the highest potential for cost savings, such as prepayment optimization and waste reduction. Intuit and Roku have achieved significant savings by targeting these high-cost areas. ... Automation tools should be accessible and user-friendly for engineers managing cloud resources. Intuit and Roku have developed tools that simplify resource management and align costs with responsible teams. Automated reporting and forecasting tools help engineers make informed decisions.
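
As a concrete illustration of the cost-allocation work described above, here is a minimal sketch in Python, assuming hypothetical billing records with team tags; the field names are illustrative and not any provider's billing schema:

```python
# A minimal sketch of automated cost allocation, assuming billing records
# have already been exported as dicts with hypothetical "team" tags and
# "cost" figures; field names are illustrative, not any cloud's schema.
from collections import defaultdict

billing_records = [
    {"service": "eks", "team": "payments", "cost": 412.50},
    {"service": "s3", "team": "payments", "cost": 88.10},
    {"service": "ec2", "team": None, "cost": 57.00},  # untagged spend
]

def allocate_costs(records):
    """Group spend by team tag and surface untagged spend for follow-up."""
    totals = defaultdict(float)
    for record in records:
        totals[record["team"] or "UNALLOCATED"] += record["cost"]
    return dict(totals)

print(allocate_costs(billing_records))
# {'payments': 500.6, 'UNALLOCATED': 57.0}
```

Surfacing the "UNALLOCATED" bucket is the point: automated reports like this are what let teams like Intuit's and Roku's tie every line of spend back to a responsible owner.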


Why CISOs Must Think Clearly Amid Regulatory Chaos

At their core, CISOs are truth sayers — akin to an internal audit committee that assesses risks and makes recommendations to improve an organization's defenses and internal controls. Ultimately, though, it's the board and a company's top executives who set policy and decide what to disclose in public filings. CISOs can and should be a counselor for this group effort because they have the understanding of security risk. And yet, the advice they can offer is limited if they don't have full visibility into an organization's technology stack. Many oversee a company's IT system, but not the products the company sells. That's crucial when it comes to data-dependent systems and devices that can provide network-access targets to cybercriminals. Those might include medical devices, or sensors and other Internet of Things endpoints used in manufacturing lines, electric grids, and other critical physical infrastructure. In short: a company's defenses are only as strong as the board and its top executives allow them to be. And if there is a breach, as in the case of SolarWinds? CISOs do not determine the materiality of a cybersecurity incident; a company's top executives and its board make that call. The CISO's responsibilities in that scenario involve responding to the incident and conducting the follow-up forensics required to help minimize or avoid future incidents.


Building Secure Multi-Cloud Architectures: A Framework for Modern Enterprise Applications

The technical controls alone cannot secure multi-cloud environments. Organizations must conduct cloud security architecture reviews before implementing any multi-cloud solution. These reviews should focus on data flow patterns between clouds, authentication and authorization requirements, and compliance obligations across all relevant jurisdictions. Completing these tasks thoroughly and diligently will ensure that multi-cloud security is baked into the architectural layer between the clouds and in the clouds themselves. While thorough architecture reviews establish the foundation, automation brings these security principles to life at scale. Automation provides a major advantage to security operations for multi-cloud environments. By treating infrastructure and security as code, organizations can achieve consistent configurations across clouds, implement automated security testing and enable fast response to security events. This improves overall security and reduces operational overhead because it allows us to do more with less and to reduce human error. Our security operations experienced a substantial enhancement when we moved to automated compliance checks. Still, we did not just throw AWS services at the problem. We engaged our security team deeply in the process.
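
To make the "compliance as code" idea concrete, here is a minimal, hedged sketch of an automated check applied uniformly across clouds; the resource fields and policy rules are invented for illustration, not tied to any provider's API:

```python
# A minimal policy-as-code sketch: the same compliance rules run against
# resources from every cloud. Resource dicts and rules are hypothetical.
RESOURCES = [
    {"id": "bucket-1", "cloud": "aws", "encrypted": True,  "public": False},
    {"id": "bucket-2", "cloud": "gcp", "encrypted": False, "public": True},
]

POLICIES = [
    ("encryption-at-rest", lambda r: r["encrypted"]),
    ("no-public-access",   lambda r: not r["public"]),
]

def check_compliance(resources, policies):
    """Apply every policy to every resource, regardless of which cloud hosts it."""
    violations = []
    for resource in resources:
        for name, rule in policies:
            if not rule(resource):
                violations.append((resource["cloud"], resource["id"], name))
    return violations

for cloud, resource_id, policy in check_compliance(RESOURCES, POLICIES):
    print(f"[{cloud}] {resource_id} violates {policy}")
```

Running such checks on every change is what delivers the consistent cross-cloud configurations the paragraph describes, without a human reviewing each cloud separately.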


Scaling Dynamic Application Security Testing (DAST)

One solution is to monitor requests sent to the target web server and extrapolate an OpenAPI Specification based on those requests in real-time. This monitoring could be performed client-side, server-side, or in-between on an API gateway, load-balancer, etc. This is a scalable, automatable solution that does not require each developer’s involvement. Depending on how long it runs, this approach can be limited in comprehensively identifying all web endpoints. For example, if no users called the /logout endpoint, then the /logout endpoint would not be included in the automatically generated OpenAPI Specification. Another solution is to statically analyze the source code for a web service and generate an OpenAPI Specification based on defined API endpoint routes that the automation can glean from the source code. Microsoft internally prototyped this solution and found it to be non-trivial to reliably discover all API endpoint routes and all parameters by parsing abstract syntax trees without access to a working build environment. This solution was also unable to handle scenarios of dynamically registered API route endpoint handlers. ... To truly scale DAST for thousands of web services, we need to automatically, comprehensively, and deterministically generate OpenAPI Specifications.
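
A minimal sketch of the traffic-monitoring approach, assuming a hypothetical capture format for observed requests; it also reproduces the coverage limitation noted above, since uncalled endpoints such as /logout never appear:

```python
# Fold observed HTTP requests into a bare-bones OpenAPI Specification.
# The (method, path) tuples are a hypothetical capture format; a real
# deployment would tap an API gateway or load balancer instead.
import json

observed = [
    ("GET", "/users"),
    ("POST", "/users"),
    ("GET", "/login"),
    # /logout was never called, so it will be missing -- the coverage
    # limitation the article points out
]

def build_spec(requests):
    paths = {}
    for method, path in requests:
        paths.setdefault(path, {})[method.lower()] = {
            "responses": {"200": {"description": "observed"}}
        }
    return {"openapi": "3.0.3",
            "info": {"title": "Inferred API", "version": "0.1"},
            "paths": paths}

print(json.dumps(build_spec(observed), indent=2))
```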


Post-Quantum Cryptography 2025: The Enterprise Readiness Gap

"Quantum technology offers a revolutionary approach to cybersecurity, providing businesses with advanced tools to counter emerging threats," said David Close, chief solutions architect at Futurex. By using quantum machine learning algorithms, organizations can detect threats faster and more accurately. These algorithms identify subtle patterns that indicate multi-vector cyberattacks, enabling proactive responses to potential breaches. Innovations such as quantum key distribution and quantum random number generators enable unbreakable encryption and real-time anomaly detection, making them indispensable in fraud prevention and secure communications, Close said. These technologies not only protect sensitive data but also ensure the integrity of financial transactions and authentication protocols. A cornerstone of quantum security is post-quantum cryptography, PQC. Unlike traditional cryptographic methods, PQC algorithms are designed to withstand attacks from quantum computers. Standards recently established by the National Institute of Standards and Technology include algorithms such as Kyber, Dilithium and SPHINCS+, which promise robust protection against future quantum threats.


Tricking the bad guys: realism and robustness are crucial to deception operations

The goal of deception technology, also known as deception techniques, operations, or tools, is to create an environment that attracts and deceives adversaries to divert them from targeting the organization’s crown jewels. Rapid7 defines deception technology as “a category of incident detection and response technology that helps security teams detect, analyze, and defend against advanced threats by enticing attackers to interact with false IT assets deployed within your network.” Most cybersecurity professionals are familiar with the current most common application of deception technology, honeypots, which are computer systems sacrificed to attract malicious actors. But experts say honeypots are merely decoys that should be deployed as part of more overarching efforts to entice shrewd, easily angered adversaries into buying into elaborate deceptions. Companies selling honeypots “may not be thinking about what it takes to develop, enact, and roll out an actual deception operation,” Handorf said. “As I stressed, you have to know your infrastructure. You have to have a handle on your inventory, the log analysis in your case. But you also have to think that a deception operation is not a honeypot. It is more than a honeypot. It is a strategy that you have to think about and implement very decisively and with willful intent.”
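
At its simplest, a honeypot is just a listener on a port nothing legitimate should touch. A minimal sketch follows, with an arbitrary port and a fake SSH banner chosen purely for illustration; as Handorf stresses, a real deception operation goes far beyond this:

```python
# A minimal honeypot sketch: a TCP listener on a decoy port that logs
# whoever touches it. Port 2222 and the banner are arbitrary choices.
import socket
import datetime

def run_honeypot(host="0.0.0.0", port=2222):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        print(f"decoy listening on {host}:{port}")
        while True:
            conn, addr = srv.accept()
            with conn:
                # any interaction with a decoy is a high-signal alert
                ts = datetime.datetime.utcnow().isoformat()
                print(f"{ts} touch from {addr[0]}:{addr[1]}")
                conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")  # banner bait

if __name__ == "__main__":
    run_honeypot()
```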


Effective Techniques to Refocus on Security Posture

If you work in software development, then “technical debt” is a term that likely triggers strong reactions. Foundationally, technical debt serves a similar function to financial debt. When well-managed, both can be used as leverage for further growth opportunities. In the context of engineering, technical debt can help expand product offerings and operations, helping a business grow faster than the debt accrues, thanks to the opportunities the leverage offers. On the other hand, debt also comes with risks, and the rate of exposure is variable, dependent on circumstance. In the context of security, acceptance of technical debt from End of Life (EoL) software and risky decisions enables threats whose greatest advantage is time, the exact resource that debt leverages. ... The trustworthiness of software is dependent on the exploitable attack surface. Exploitable vulnerabilities are part of that attack surface. If the outcome of the SBOM with a VEX attestation is a deeper understanding of those applicable and exploitable vulnerabilities, coupling that information with exploit predictive analysis like EPSS helps to bring valuable information to decision-making. This type of assessment allows for programmatic decision-making. It allows software suppliers to express risk in the context of their applications and empowers software consumers to escalate the problems worth solving.
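
A minimal sketch of the decision logic described above, combining a VEX-style applicability status with an EPSS score; the records and the threshold are hypothetical, and real VEX documents and the EPSS feed have richer schemas:

```python
# Prioritize findings that are both applicable (per VEX) and likely to be
# exploited (per EPSS). Records and the 0.5 cutoff are illustrative.
findings = [
    {"cve": "CVE-2024-0001", "vex_status": "affected",     "epss": 0.91},
    {"cve": "CVE-2024-0002", "vex_status": "not_affected", "epss": 0.95},
    {"cve": "CVE-2024-0003", "vex_status": "affected",     "epss": 0.02},
]

def prioritize(findings, epss_threshold=0.5):
    """Keep vulnerabilities that apply to this product AND score high on
    exploit likelihood, ranked by EPSS score."""
    actionable = [f for f in findings
                  if f["vex_status"] == "affected" and f["epss"] >= epss_threshold]
    return sorted(actionable, key=lambda f: f["epss"], reverse=True)

for f in prioritize(findings):
    print(f["cve"], f["epss"])  # only CVE-2024-0001 survives the filter
```

This is the "programmatic decision-making" the paragraph refers to: the filter turns tens of thousands of raw findings into a short, defensible worklist.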


Sustainability, grid demands, AI workloads will challenge data center growth in 2025

Uptime expects new and expanded data center developers will be asked to provide or store power to support grids. That means data centers will need to actively collaborate with utilities to manage grid demand and stability, potentially shedding load or using local power sources during peak times. Uptime forecasts that data center operators “running non-latency-sensitive workloads, such as specific AI training tasks, could be financially incentivized or mandated to reduce power use when required.” “The context for all of this is that the [power] grid, even if there were no data centers, would have a problem meeting demand over time. They’re having to invest at a rate that is historically off the charts. It’s not just data centers. It’s electric vehicles. It’s air conditioning. It’s decarbonization. But obviously, they are also retiring coal plants and replacing them with renewable plants,” Uptime’s Lawrence explained. “These are much less stable, more intermittent. So, the grid has particular challenges.” ... According to Uptime, infrastructure requirements for next-generation AI will force operators to explore new power architectures, which will drive innovations in data center power delivery. As data centers need to handle much higher power densities, it will throw facilities off balance in terms of how the electrical infrastructure is designed and laid out.


Is the Industrial Metaverse Transforming the E&U Industry?

One major benefit of the industrial metaverse is that it can monitor equipment issues and hazardous conditions in real time so that any fluctuations in the electrical grid are instantly detected. As they collect data and create simulations, digital twins can also function as proactive tools by predicting potential problems before they escalate. “You can see which components are in early stages of failure,” a Hitachi Energy spokesperson notes. “You can see what the impact of failure is and what the time to failure is, so you’re able to make operational decisions, whether it’s a switching operation, deploying a crew, or scheduling an outage, whatever that looks like.” ... Digital twins also make it possible for operators to simulate and test operational changes in virtual environments before real-world implementation, reducing excessive costs. “While it will not totally replace on-site testing, it can significantly reduce physical testing, lower costs and contribute to an increased quality of the protection system,” Andrea Bonetti, a power system protection specialist at Megger, tells the Switzerland-based International Electrotechnical Commission. Shell is one of several energy providers that use digital twins to enhance operations, according to Digital Twin Insider.
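
A toy sketch of the kind of "time to failure" estimate a digital twin might surface, fitting a line to a degrading health metric and projecting when it crosses a failure threshold; the readings and the threshold are invented for illustration:

```python
# Estimate days until a component's health metric crosses a failure
# threshold, assuming roughly linear degradation. Data is invented.
def estimate_time_to_failure(readings, threshold):
    """readings: list of (day, health) pairs; returns projected days from
    the last reading until health reaches the threshold, or None."""
    n = len(readings)
    mean_x = sum(x for x, _ in readings) / n
    mean_y = sum(y for _, y in readings) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in readings)
             / sum((x - mean_x) ** 2 for x, _ in readings))
    if slope >= 0:
        return None  # not degrading; no failure projected
    intercept = mean_y - slope * mean_x
    return (threshold - intercept) / slope - readings[-1][0]

health = [(0, 98.0), (30, 93.5), (60, 89.2), (90, 84.8)]
print(f"~{estimate_time_to_failure(health, threshold=70.0):.0f} days to failure")
```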


Daily Tech Digest - January 09, 2025

It’s remarkably easy to inject new medical misinformation into LLMs

By injecting specific information into this training set, it's possible to get the resulting LLM to treat that information as a fact when it's put to use. This can be used for biasing the answers returned. This doesn't even require access to the LLM itself; it simply requires placing the desired information somewhere where it will be picked up and incorporated into the training data. And that can be as simple as placing a document on the web. As one manuscript on the topic suggested, "a pharmaceutical company wants to push a particular drug for all kinds of pain which will only need to release a few targeted documents in [the] web." ... rather than being trained on curated medical knowledge, these models are typically trained on the entire Internet, which contains no shortage of bad medical information. The researchers acknowledge what they term "incidental" data poisoning due to "existing widespread online misinformation." But a lot of that "incidental" information was generally produced intentionally, as part of a medical scam or to further a political agenda. ... Finally, the team notes that even the best human-curated data sources, like PubMed, also suffer from a misinformation problem. The medical research literature is filled with promising-looking ideas that never panned out, and out-of-date treatments and tests that have been replaced by approaches more solidly based on evidence.


CIOs are rethinking how they use public cloud services. Here’s why.

Where are those workloads going? “There’s a renewed focus on on-premises, on-premises private cloud, or hosted private cloud versus public cloud, especially as data-heavy workloads such as generative AI have started to push cloud spend up astronomically,” adds Woo. “By moving applications back on premises, or using on-premises or hosted private cloud services, CIOs can avoid multi-tenancy while ensuring data privacy.” That’s one reason why Forrester predicts four out of five so-called cloud leaders will increase their investments in private cloud by 20% this year. That said, 2025 is not just about repatriation. “Private cloud investment is increasing due to gen AI, costs, sovereignty issues, and performance requirements, but public cloud investment is also increasing because of more adoption, generative AI services, lower infrastructure footprint, access to new infrastructure, and so on,” Woo says. ... Woo adds that public cloud is costly for workloads that are data-heavy because organizations are charged both for data stored and data transferred between availability zones (AZ), regions, and clouds. Vendors also charge egress fees for data leaving as well as data entering a given AZ. “So for transfers between AZs, you essentially get charged twice, and those hidden transfer fees can really rack up,” she says.
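
A back-of-the-envelope illustration of the double charge Woo describes, using a commonly cited ballpark rate rather than any provider's current price list:

```python
# Rough arithmetic for the "charged twice" effect on cross-AZ transfers.
# The $0.01/GB-per-direction rate is a common ballpark, not a quote.
gb_transferred = 50_000           # 50 TB moved between AZs in a month
rate_per_gb_per_direction = 0.01  # assumption: $/GB charged on each side

egress_side = gb_transferred * rate_per_gb_per_direction   # leaving AZ A
ingress_side = gb_transferred * rate_per_gb_per_direction  # entering AZ B
print(f"Cross-AZ transfer bill: ${egress_side + ingress_side:,.2f}")  # $1,000.00
```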


What CISOs Think About GenAI

“As a [CISO], I view this technology as presenting more risks than benefits without proper safeguards,” says Harold Rivas, CISO at global cybersecurity company Trellix. “Several companies have poorly adopted the technology in the hopes of promoting their products as innovative, but the technology itself has continued to impress me with its staggeringly rapid evolution.” However, hallucinations can get in the way. Rivas recommends conducting experiments in controlled environments and implementing guardrails for GenAI adoption. Without them, companies can fall victim to high-profile cyber incidents like they did when first adopting cloud. Dev Nag, CEO of support automation company QueryPal, says he had initial, well-founded concerns around data privacy and control, but the landscape has matured significantly in the past year. “The emergence of edge AI solutions, on-device inference capabilities, and private LLM deployments has fundamentally changed our risk calculation. Where we once had to choose between functionality and data privacy, we can now deploy models that never send sensitive data outside our control boundary,” says Nag. “We're running quantized open-source models within our own infrastructure, which gives us both predictable performance and complete data sovereignty.”


Scaling RAG with RAGOps and agents

To maximize their effectiveness, LLMs that use RAG also need to be connected to sources from which departments wish to pull data – think customer service platforms, content management systems and HR systems. Such integrations require significant technical expertise, including experience with mapping data and managing APIs. Also, as RAG models are deployed at scale, they can consume significant computational resources and generate large amounts of data. This requires the right infrastructure and the experience to deploy it, as well as the ability to manage the data it supports across large organizations. One approach to mainstreaming RAG that has AI experts buzzing is RAGOps, a methodology that helps automate RAG workflows, models and interfaces in a way that ensures consistency while reducing complexity. RAGOps enables data scientists and engineers to automate data ingestion and model training, as well as inferencing. It also addresses the scalability stumbling block by providing mechanisms for load balancing and distributed computing across the infrastructure stack. Monitoring and analytics are executed throughout every stage of RAG pipelines to help continuously refine and improve models and operations.
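
A minimal sketch of the retrieval step that RAGOps pipelines automate; the bag-of-words embed() below is a toy stand-in for a real embedding model and vector database:

```python
# Toy retrieval-augmented generation: embed documents, retrieve the best
# match for a query, and assemble a grounded prompt for the LLM.
from collections import Counter
import math

def embed(text):
    # stand-in for a real embedding model
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

documents = [
    "Refunds are processed within five business days.",
    "Our HR system tracks vacation balances per employee.",
]

def retrieve(query, docs, k=1):
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

context = retrieve("How long do refunds take?", documents)
prompt = (f"Answer using only this context:\n{context[0]}\n\n"
          "Question: How long do refunds take?")
print(prompt)  # this prompt would then be sent to the LLM
```

Everything around this loop, ingestion, index refresh, load balancing, and monitoring, is what RAGOps automates at scale.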


Navigating Third-Party Risk in Procurement Outsourcing

Shockingly, only 57% of organisations have enterprise-wide agreements that clearly define which services can or cannot be outsourced. This glaring gap highlights the urgent need to create strong frameworks – not just for external agreements, but also for intragroup arrangements. Internal agreements, though frequently overlooked, demand the same level of attention when it comes to governance and control. Without these solid frameworks, companies are leaving themselves exposed to risks that could have been mitigated with just a little more attention to detail. Ongoing monitoring is also crucial to TPRM; organisations must actively leverage audit rights, access provisions and outcome-focused evaluations. This means assessing operational and concentration risks through severe yet plausible scenarios, ensuring they’re prepared for the worst case while staying vigilant in everyday operations. ... As the complexity of third-party risk grows, so too does the role of AI and automation. The days of relying on spreadsheets and homegrown databases are long gone. Ed’s thoughts on this topic are unequivocal: “AI and automation are critical as third-party risk becomes increasingly complex. Significant work is required for initial risk assessments, pre-contract due diligence, post-contract monitoring, SLA reviews and offboarding.”


Five Ways Your Platform Engineering Journey Can Derail

Chernev’s first pitfall is when a company tries to start platform engineering by only changing the name of its current development practices, without doing the real work. “Simply rebranding an existing infrastructure or DevOps or SRE practice over to platform engineering without really accounting for evolving the culture within and outside the team to be product-oriented or focused” is a huge mistake ... Another major pitfall, he said, is not having and maintaining product backlogs — prioritized lists of work for the development team — that are directly targeting your developers. “For the groups who have backlogs, they are usually technology-oriented,” he said. “That misalignment in thinking across planning and missing feedback loops is unlikely to move progress forward within the organization. That ultimately leads the initiative to fail to deliver business value. Instead, they should be developer-centric,” said Chernev. ... This is another important point, said Chernev — companies that do not clearly articulate the value-add of their platform engineering charter to both technical and non-technical stakeholders inside their operations will not fully be able to reap the benefits of the platform’s use across the business.


Building generative AI applications is too hard, developers say

Given the number of tools they need to do their job, it’s no surprise that developers are loath to spend a lot of time adding another to their arsenal. Two-thirds of them are only willing to invest two hours or less in learning a new AI development tool, with a further 22% allocating three to five hours, and only 11% giving more than five hours to the task. And on the whole, they don’t tend to explore new tools very often — only 21% said they check out new tools monthly, while 78% do so once every one to six months, and the remaining 2% rarely or never. The survey found that they tend to look at around six new tools each time. ... The survey highlights the fact that, while AI and generative AI are becoming increasingly important to businesses, the tools and techniques required to develop them are not keeping up. “Our survey results shed light on what we can do to help address the complexity of AI development, as well as some tools that are already helping,” Gunnar noted. “First, given the pace of change in the generative AI landscape, we know that developers crave tools that are easy to master.” And, she added, “when it comes to developer productivity, the survey found widespread adoption and significant time savings from the use of AI-powered coding tools.”


AI infrastructure – The value creation battleground

Scaling AI infrastructure isn’t just about adding more GPUs or building larger data centers – it’s about solving fundamental bottlenecks in power, latency, and reliability while rethinking how intelligence is deployed. AI mega clusters are engineering marvels – data centers capable of housing hundreds of thousands of GPUs and consuming gigawatts of power. These clusters are optimized for machine learning workloads with advanced cooling systems and networking architectures designed for reliability at scale. Consider Microsoft’s Arizona facility for OpenAI: with plans to scale up to 1.5 gigawatts across multiple sites, it demonstrates how these clusters are not just technical achievements but strategic assets. By decentralizing compute across multiple data centers connected via high-speed networks, companies like Google are pioneering asynchronous training methods to overcome physical limitations such as power delivery and network bandwidth. Scaling AI is an energy challenge. AI workloads already account for a growing share of global data center power demand, which is projected to double by 2026. This creates immense pressure on energy grids and raises urgent questions about sustainability.


4 Leadership Strategies For Managing Teams In The Metaverse

Leaders must develop new skills and adopt innovative strategies to thrive in the metaverse. Here are some key approaches:

Invest in digital literacy — Leaders must become fluent in the tools and technologies that power the metaverse. This includes understanding VR/AR platforms, blockchain applications and collaborative software such as Slack, Trello and Figma.

Emphasize inclusivity — The metaverse has the potential to democratize access to opportunities, but only if it’s designed with inclusivity in mind. Leaders should ensure that virtual spaces are accessible to employees of all abilities and backgrounds. This might include providing hardware like VR headsets or ensuring platforms support diverse communication styles.

Create rituals for connection — Leaders can foster connection through virtual rituals and gatherings in the absence of physical offices. These activities, from weekly team check-ins to informal virtual “watercooler” chats, help build camaraderie and maintain a sense of community.

Focus on well-being — Effective leaders prioritize employee well-being by setting clear boundaries, encouraging breaks and supporting mental health.


How AI will shape work in 2025 — and what companies should do now

“The future workforce will likely collaborate more closely with AI tools. For example, marketers are already using AI to create more personalized content, and coders are leveraging AI-powered code copilots. The workforce will need to adapt to working alongside AI, figuring out how to make the most of human strengths and AI’s capabilities. “AI can also be a brainstorming partner for professionals, enhancing creativity by generating new ideas and providing insights from vast datasets. Human roles will increasingly focus on strategic thinking, decision-making, and emotional intelligence. ... “Companies should focus on long-term strategy, quality data, clear objectives, and careful integration into existing systems. Start small, scale gradually, and build a dedicated team to implement, manage, and optimize AI solutions. It’s also important to invest in employee training to ensure the workforce is prepared to use AI systems effectively. “Business leaders also need to understand how their data is organized and scattered across the business. It may take time to reorganize existing data silos and pinpoint the priority datasets. To create or effectively implement well-trained models, businesses need to ensure their data is organized and prioritized correctly.”



Quote for the day:

"The world is starving for original and decisive leadership." -- Bryant McGill

Daily Tech Digest - September 16, 2024

AI Ethics – Part I: Guiding Principles for Enterprise

The world has now caught up to what was previously science fiction. We are now designing AI that is in some ways far more advanced than anything Isaac Asimov could have imagined, while at the same time being far more limited. Even though they were originally conceived as fictional principles, there have been efforts to adapt and enhance Isaac Asimov’s Three Laws of Robotics to fit modern enterprise AI-based solutions. Here are some notable examples:

Human-Centric AI Principles - Modern AI ethics frameworks often emphasize human safety and well-being, echoing Asimov’s First Law. ...

Ethical AI Guidelines - Enterprises are increasingly developing ethical guidelines for AI that align with Asimov’s Second Law. These guidelines ensure that AI systems obey human instructions while prioritizing ethical considerations. ...

Bias Mitigation and Fairness - In line with Asimov’s Third Law, there is a strong focus on protecting the integrity of AI systems. This includes efforts to mitigate biases and ensure fairness in AI outputs. ...

Enhanced Ethical Frameworks - Some modern adaptations include additional principles, such as the “Zeroth Law,” which prioritizes humanity’s overall well-being.


Power of Neurodiversity: Why Software Needs a Revolution

Neurodiversity, which includes ADHD, autism spectrum disorder, and dyslexia, presents unique challenges for individuals, yet it also comes with many unique strengths. People on the autism spectrum often excel in logical thinking, while individuals with ADHD can demonstrate exceptional attention to detail when engaged in areas of interest. Those with dyslexia frequently display creative thinking skills. However, software design often fails to accommodate neurodiverse users. For example, websites or apps with cluttered interfaces can overwhelm users with ADHD, while those sites that rely heavily on text make it harder for individuals with dyslexia to process information. Additionally, certain sounds or colors, such as bright colors, may be overwhelming for someone with autism. Users should not have to adapt to poorly designed software. Instead, software designers must create products designed to meet these user needs. Waiting to receive software accessibility training on the job may be too late, as software designers and developers will need to relearn foundational skills. Moreover, accessibility still does not seem to be a priority in the workplace, with most job postings for relevant positions not requiring these skills.


Protect Your Codebase: The Importance of Provenance

When you know that provenance is a vector for a software supply chain attack, you can take action to protect it. The first step is to collect the provenance data for your dependencies, where it exists; projects that meet SLSA level 1 or higher produce provenance data you can inspect and verify. Ensure that trusted identities generate provenance. If you can prove that provenance data came from a system you own and secured or from a known good actor, it’s easier to trust. Cryptographic signing of provenance records provides assurance that the record was produced by a verifiable entity — either a person or a system with the appropriate cryptographic key. Store provenance data in a write-once repository. This allows you to verify later if any provenance data was modified. Modification, whether malicious or accidental, is a warning sign that your dependencies have been tampered with somehow. It’s also important to protect the provenance you produce for yourself and any downstream users. Implement strict access and authentication controls to ensure only authorized users can modify provenance records. 
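
A minimal sketch of the sign-then-verify principle using Ed25519 from the widely used Python cryptography package; real SLSA provenance uses structured attestation formats such as in-toto, and the record fields here are hypothetical, so treat this as an illustration of the principle only:

```python
# Sign a provenance record, then verify it to detect tampering.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# hypothetical provenance record; real formats (e.g. in-toto) are richer
record = json.dumps({"artifact": "app-1.4.2.tar.gz",
                     "builder": "ci.example.com",
                     "source_commit": "abc123"}).encode()

private_key = Ed25519PrivateKey.generate()  # held by the trusted build system
signature = private_key.sign(record)        # shipped alongside the record
public_key = private_key.public_key()       # distributed to consumers

try:
    public_key.verify(signature, record)    # raises if record was modified
    print("provenance intact")
except InvalidSignature:
    print("provenance record was modified!")
```

Pairing signatures like this with write-once storage gives you both properties the paragraph calls for: you can prove who produced a record and detect if anyone changed it afterward.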


Are You Technical or Non-Technical? Time to Reframe the Discussion

The term “technical” can introduce bias into hiring and career development, potentially leading to decisions swayed more by perception than by a candidate’s qualifications. Here, hiring decisions can sometimes reflect personal biases if candidates do not fit a stereotypical image or lack certain qualifications not essential for the role. For instance, a candidate might be viewed as not technical enough if they lack server administration experience, even when the job primarily involves software development. Unconscious bias can skew evaluations, leading to decisions based more on perceptions than actual skills. To address this issue, it is important to clearly define the skills required for a position. For example, rather than broadly labeling a candidate as “not technical enough,” it is more effective to specify areas for improvement, such as “needs advanced database management skills.” This approach not only highlights areas where candidates excel, such as developing user-centric reports, but also clarifies specific shortcomings. Clearly stating requirements, such as “requires experience building scalable applications with technology Y,” enhances the transparency and objectivity of the hiring process.


Will Future AI Demands Derail Sustainable Energy Initiatives?

The single biggest thing enterprises are doing to address energy concerns is moving toward more energy efficient second-generation chips, says Duncan Stewart, a research director with advisory firm Deloitte Technology, via email. "These chips are a bit faster at accelerating training and inference -- about 25% better than first-gen chips -- and their efficiency is almost triple that of first-generation chips." He adds that almost every chipmaker is now targeting efficiency as the most important chip feature. In the meantime, developers will continue to play a key role in optimizing AI energy needs, as well as validating whether AI is even required to achieve a particular outcome. "For example, do we need to use a large language model that requires lots of computing power to generate an answer from enormous data sets, or can we use more narrow and applied techniques, like predictive models that require much less computing because they’ve been trained on much more specific and relevant data sets?" Warburton asks. "Can we utilize compute instances that are powered by low-carbon electricity sources?"


When your cloud strategy is ‘it depends’

As for their use of private cloud, some of the rationale is purely a cost calculation. For some workloads, it’s cheaper to run on premises. “The cloud is not cheaper. That’s a myth,” one of the IT execs told me, while acknowledging cost wasn’t their primary reason for embracing cloud anyway. I’ve been noting this for well over a decade. Convenience, not cost, tends to drive cloud spend—and leads to a great deal of cloud sprawl, as Osterman Research has found. ... You want developers, architects, and others to feel confident with new technology. You want to turn them into allies, not holdouts. Jassy declared, “Most of the big initial challenges of transforming the cloud are not technical” but rather “about leadership—executive leadership.” That’s only half true. It’s true that developers thrive when they have executive air cover. This support makes it easier for them to embrace a future they likely already want. But they also need that executive support to include time and resources to learn the technologies and techniques necessary for executing that new direction. If you want your company to embrace new directions faster, whether cloud or AI or whatever it may be, make it safe for them to learn. 


4 steps to shift from outputs to outcomes

Shifting the focus to outcomes — business results aligned with strategic goals — was the key to unlocking value. David had to teach his teams to see the bigger picture of their business impact. By doing this, every project became a lever to achieve revenue growth, cost savings, and customer satisfaction, rather than just another task list. Simply being busy doesn’t mean a project is successful in delivering business value, yet many teams proudly wear busy badges, leaving executives wondering why results aren’t materializing. Busy doesn’t equal productive. In fact, busy gets in the way of being productive. ... A common issue is project teams lose sight of how their work aligns with the company’s broader goals. When David took over, his teams were still disconnected from those strategic objectives, but by revisiting them and ensuring that every project directly supported those goals, the teams could finally see they were part of something much larger than just a list of tasks. Many business leaders think their teams are mind readers. They hold a town hall, send out a slide deck, and then expect everyone to get it. But months later, they’re surprised when the strategy starts slipping through their fingers.


Is Your Business Ready For The Inevitable Cyberattack?

Cybersecurity threats are inevitable, making it essential for businesses to prepare for the worst. The critical question is: if your business is hacked, is your data protected, and can you recover it in hours rather than days or weeks? If not, you are leaving your business vulnerable to severe disruptions. While everyone emphasises the importance of backups, the real challenge lies in ensuring their integrity and recoverability. Are your backups clean? Can you quickly restore data without prolonged downtime? The total cost of ownership (TCO) of your data protection strategy over time is a crucial consideration. Traditional methods, such as relying on Iron Mountain for physical backups, are cumbersome and time-bound, requiring significant effort to locate and restore data. ... The story of data storage, much like the shift to cloud computing, revolves around strategically placing the right parts of your business operations in the most suitable locations at the right times. Data protection follows the same principle. Resilience is still a topic of frequent discussion, yet its broad nature makes it challenging to establish a clear set of best practices.


Digital twin in the hospitality industry-Innovations in planning & designing a hotel

The Metaverse is revolutionising the guest experience: hotels can offer virtual reality tours of rooms and services, giving guests the chance to preview their stay before booking. Moreover, hotels can provide tailored virtual experiences through interactive concierge services and bespoke room décor options. More events will be held through immersive games and entertaining interactivity, bringing better visitor experiences to the hospitality industry. The metaverse can also generate revenue through tickets, sponsorships, and virtual item sales. ... Operational efficiency is the bottom line of hospitality, where everything seems small but matters so much for guest satisfaction. Imagine a hotel whose HVAC and lighting systems are mirrored by a digital twin: managers can understand energy consumption patterns, predict what will require maintenance, and adjust settings based on real-time data. Digital twins also enable better training of staff and allocation of resources. Staff can get comfortable with changes in procedures and layout beforehand by interacting with the virtual model.


The cybersecurity paradigm shift: AI is necessitating the need to fight fire with fire

Organisations should be prepared for the worst-case scenario of a cyber-attack to establish cyber resilience. This involves being able to protect and secure data, detect cyber threats and attacks, and respond with automated data recovery processes. Each element is critical to ensuring an organization can maintain operational integrity under attack. ... However, the reality is that many organisations are unable to keep up. From the company's recent survey released in late January 2024, 79% of IT and security decision-makers said they did not have full confidence in their company’s cyber resilience strategy. Just 12% said their data security, management, and recovery capabilities had been stress tested in the six months prior to being surveyed. ... To bolster cyber resilience, companies must integrate a robust combination of people, processes, and technology. Fostering a skilled workforce equipped to detect and respond to threats effectively starts with having employee education and training in place to keep pace with the rising sophistication of AI-driven phishing attacks.



Quote for the day:

"If you want to achieve excellence, you can get there today. As of this second, quit doing less-than-excellent work." -- Thomas J. Watson

Daily Tech Digest - August 29, 2024

The human factor in the industrial metaverse

The virtualisation of factories might ensure additional efficiencies, but it has the potential to fundamentally alter the human dynamics within an organisation. With rising reliance on digital tools, it becomes challenging to maintain the human aspects of work. ... Just as evolving innovation is crucial, so is organisational culture. Leaders must promote a culture that supports agility, innovation, and continuous learning to ensure success in a virtual factory environment. This can be achieved by being transparent, encouraging experimentation, and recognising and rewarding an employee’s creativity and adaptability. With the rapid evolution of virtual factories, employees must undergo comprehensive training that covers both technical and soft skills to adapt to the virtual environment. While practical, hands-on exercises are crucial for real-world application, it’s also important to have continuous learning with ongoing workshops, online training, and cross-training opportunities. To further enhance knowledge sharing, establishing mentorship and peer-learning programs can ensure a smooth transition, fostering a cohesive and productive workforce.


Challenging The Myths of Generative AI

The productivity myth suggests that anything we spend time on is up for automation — that any time we spend can and should be freed up for the sake of having even more time for other activities or pursuits — which can also be automated. The importance and value of thinking about our work and why we do it is waved away as a distraction. The goal of writing, this myth suggests, is filling a page rather than the process of thought that a completed page represents. ... The prompt myth is a technical myth at the heart of the LLM boom. It was a simple but brilliant design stroke: rather than a window where people paste text and allow the LLM to extend it, ChatGPT framed it as a chat window. We’re used to chat boxes, a window that waits for our messages and gets a (previously human) response in return. In truth, users provide words that dictate what we get back. ... Intelligence myths arise from the reliance on metaphors of thinking in building automated systems. These metaphors – learning, understanding, and dreaming – are helpful shorthand. But intelligence myths rely on hazy connections to human psychology. They often conflate AI systems inspired by models of human thought with a capacity to think.


The New Frontiers of Cyber-Warfare: Insights From Black Hat 2024

Corporate sanctions against nations are just one aspect of the broader issue. Moss also spoke about a new kind of trade war, where nation-states are pushing back against big tech companies and their political and economic agendas – along with the agendas of countries where these companies are based. Moss noted that countries are now using digital protectionist policies to wage what he called "a new way to escalate." He cited India's 2020 ban on TikTok, which resulted in China’s ByteDance reportedly facing up to $6 billion in losses. Moss also discussed the phenomenon of “app diplomacy,” where governments dictate to big tech companies like Apple and Google which apps are permitted in their markets. He mentioned the practice of “tech sorting,” where countries try to maintain strict control over foreign tech through redirection, throttling, or direct censorship. ... Shifting from concerns over AI to the emerging weapons of cyber espionage and warfare, Moss, moderating Black Hat’s wrap-up discussion, brought up the growing threat of hardware attacks. He asked Jos Wetzels, partner at Midnight Blue, to discuss the increasing accessibility of electromagnetic (EM) and laser weapons.


5 best practices for running a successful threat-informed defense in cybersecurity

Assuming organizations are doing vulnerability scanning across systems, applications, attack surfaces, cloud infrastructure, etc., they will come up with lists of tens of thousands of vulnerabilities. Even big, well-resourced enterprises can’t remediate this volume of vulnerabilities in a timely fashion, so leading firms depend upon threat intelligence to guide them into fixing those vulnerabilities most likely to be exploited presently or in the near future. ... As previously mentioned, a threat-informed defense involves understanding adversary TTPs, comparing these TTPs to existing defenses, identifying gaps, and then implementing compensating controls. These last steps equate to reviewing existing detection rules, writing new ones, and then testing them all to make sure they detect what they are supposed to. Rather than depending on security tool vendors to develop the right detection rules, leading organizations invest in detection engineering across multiple toolsets such as XDR, email/web security tools, SIEM, cloud security tools, etc. CISOs I spoke with admit that this can be difficult and expensive to implement. 
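
A minimal sketch of the gap-analysis step described above, comparing adversary techniques (MITRE ATT&CK IDs) against the techniques covered by tested detection rules; the technique sets are illustrative, not a real threat profile:

```python
# Compare adversary TTPs from threat intel against detection coverage to
# find where compensating controls or new rules are needed.
adversary_ttps = {"T1059", "T1566", "T1078", "T1486"}  # observed in intel reports
detection_coverage = {"T1059", "T1566"}                # techniques with tested rules

gaps = adversary_ttps - detection_coverage
print("Uncovered techniques needing new detection rules:", sorted(gaps))
# ['T1078', 'T1486'] -> candidates for detection engineering work
```

In practice the coverage set comes from testing rules across XDR, SIEM, email/web and cloud security tools, which is exactly the cross-toolset detection engineering the paragraph says leading organizations invest in.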


Let’s Bring H-A-R-M-O-N-Y Back Into Our Tech Tools

The focus of a platform approach is on harmonized experiences: a state of balance, agreement and even pleasant interaction among the various elements and stakeholders involved in development. There needs to be a way to make it easy and enjoyable to build, test and release at the pace of today’s business without the annoying dependencies that bog down developers along the way — on both the application and infrastructure sides. I believe tool stacks and platforms that use a harmony-focused method can even bring the fun back into development. ... Resilience refers to the ability to withstand and recover from failures and disruptions, and you can’t follow a harmonized approach without it. A resilient architecture is designed to handle unexpected challenges — be they spikes in traffic, hardware malfunctions or software bugs — without compromising core functionality. How do you create resiliency? Through running, testing and debugging your code to catch errors early and often. Building a robust testing foundation can look like having a dedicated testing environment and ephemeral testing features. 


Cybersecurity Maturity: A Must-Have on the CISO’s Agenda

The process of maturation in personnel is often reflected in the way these teams are measured. Less mature teams tend to be measured on activity metrics and KPIs around how many tickets are handled and closed, for example. In more mature organisations the focus has shifted towards metrics like team satisfaction and staff retention. This has come through strongly in our research. Last year 61% of cybersecurity professionals surveyed said that the key metric they used to assess the ROI of cybersecurity automation was how well they were managing the team in terms of employee satisfaction and retention – another indication that it is reaching a more mature adoption stage. Organizations with mature cybersecurity approaches understand that tools and processes need to be guided through the maturity path, but that the reason for doing so is to serve the people working with them. The maturity and skillsets of teams should also be reviewed, and members should be given the opportunity to add their own input. What is their experience of the tools and processes in place? Do they trust the outcomes they are getting from AI- and machine learning-powered tools and processes? 


What can my organisation do about DDoS threats?

"Businesses can prevent attacks using managed DDoS protection services or through implementing robust firewalls to filter malicious traffic and deploying load balancers to distribute traffic evenly when under heavy load,” advises James Taylor, associate director, offensive security practice, at S-RM. “Other defences include rate limiting, network segmentation, anomaly detection systems and implementing responsive incident management plans.” But while firewalls and load balancers may stop some of the more basic DDoS attack types, such as SYN floods or fragmented packet attacks, they are unlikely to handle more sophisticated DDoS attacks which mimic legitimate traffic, warns Donny Chong, product and marketing director at DDoS specialist Nexusguard. “Businesses should adopt a more comprehensive approach to DDoS mitigation such as managed services,” he says. “In this setup, the most effective approach is a hybrid one, combining cloud-based mitigation with on-premises hardware which be managed externally by the DDoS specialist provider. It also combines robust DDoS mitigation with the ability to offload traffic to the designated cloud provider as and when needed.”


How Aspiring Software Developers Can Stand Out in a Tight Job Market: 5 FAQs

While technical skills are critical, the ability to listen to clients, understand their problems and translate technical information into simple language is also important. Without reliable soft skills, clients may doubt your ability to address their needs. Employers also want candidates who can collaborate and work effectively in a team setting. This involves taking initiative, having strong written and verbal communication skills and being proactive about sharing status updates. Demonstrate these skills by discussing how you applied them in college extracurriculars or in the classroom as part of group project work, and how you plan to apply them in the workplace. In a highly competitive job market, doing so may set you apart from other candidates who offer similar technical backgrounds. ... Research the company before applying for a role so you're prepared with thoughtful questions for your interview. For example, you might want to ask about the new hire onboarding process, professional development opportunities, company culture or specific questions regarding a project the interviewer has recently worked on.


Bridging the AI Gap: The Crucial Role of Vectors in Advancing Artificial Intelligence

Vector databases have recently emerged into the spotlight as the go-to method for capturing the semantic essence of various entities, including text, images, audio, and video content. Encoding this diverse range of data types into a uniform mathematical representation means that we can now quantify semantic similarities by calculating the mathematical distance between these representations. This breakthrough enables “fuzzy” semantic similarity searches across a wide array of content types. While vector databases aren’t new and won’t resolve all current data challenges, their ability to perform these semantic searches across vast datasets and feed that information to LLMs unlocks previously unattainable functionality. ... We are in the early stages of leveraging vectors, both in the emerging generative AI space and the classical ML domain. It’s important to recognise that vectors don’t come as an out-of-the-box solution and can’t simply be bolted onto existing AI or ML programs. However, as they become more prevalent and universally adopted, we can expect the development of software layers that will make it easier for less technical teams to apply vector technology effectively.
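
A minimal illustration of that core idea, reducing semantic similarity to a distance calculation; the three-dimensional vectors are toy stand-ins for real embeddings, which typically have hundreds or thousands of dimensions:

```python
# Cosine similarity over toy "embeddings": semantically close content
# scores near 1.0, unrelated content near 0.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a))
                  * math.sqrt(sum(y * y for y in b)))

embeddings = {
    "a photo of a dog":        [0.9, 0.1, 0.0],
    "a picture of a puppy":    [0.8, 0.2, 0.1],
    "quarterly revenue report": [0.0, 0.1, 0.95],
}

query = embeddings["a photo of a dog"]
for text, vec in embeddings.items():
    print(f"{cosine_similarity(query, vec):.2f}  {text}")
# the puppy caption scores ~0.98; the revenue report ~0.01
```

A vector database does exactly this comparison, just over millions of vectors with specialized indexes, and the nearest matches are what get fed to an LLM as context.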


AI Can Reshape Insight Delivery and Decision-making

Moving on to risk, Tubbs shares that AI plays a pivotal role in the organizational risk mitigation strategy. With AI, the organization can identify potential risks and propose countermeasures that can significantly contribute to business stability. Therefore, Visa can be proactive in fighting fraud and risks, specifically in the payment landscape. Another usage of AI at Visa is in making real-time decisions with real-time analytics. Given the billions of transactions a month, real-time analytics enable the organization to comprehend what the transactions mean and how to make prompt decisions around anomalous behavior. AI also fosters collaboration in the ecosystem and organization by encouraging different teams to work towards a shared objective. Summing up, she refers to the cost-saving aspect of AI and maintains that Visa is driven to automate processes that have taken a significant amount of time historically. Shifting to the darker side, Tubbs affirms that AI can also be used by fraudsters for nefarious reasons. To avoid that, Visa constantly evaluates its models and algorithms. She notes that Visa has a dedicated team to look into the dark web to understand the actions of fraudsters.



Quote for the day:

"Successful and unsuccessful people do not vary greatly in their abilities. They vary in their desires to reach their potential." -- John Maxwell

Daily Tech Digest - April 01, 2024

The Future Is Now: AI and Risk Management in 2024

Generative AI can write code, generate personalized content, and even compose music. IT and business leaders are incorporating custom LLMs into their workflows to maximize worker productivity and streamline routine procedures. Governments are developing regulations for the use of all forms of AI, with the goal of making it safer for humans to use. Organizations are also writing internal guidelines to clarify and standardize the deployment of AI in their own operations. ... Generic, generative AI isn’t reliable – especially when lives are on the line. AI for risk management is tailored to industry-specific needs, providing accurate, relevant data to allow for expedited decisions and swift actions. It considers historic threats and delivers updates in real time as events unfold. It can also understand linguistic nuances specific to risk management. ... AI-powered risk intelligence must be continuously fed quality data vetted by expert data scientists who specialize in machine learning and risk intelligence. The technology should be monitored and trained by humans to ensure it provides only the most accurate and trustworthy information.


Strengthening Cyber Security

In today’s interconnected digital ecosystem, the ramifications of a cyber breach extend far beyond mere data loss. They can disrupt essential services, compromise national security and erode public trust in Government institutions. Therefore, the imperative to fortify cyber security measures cannot be overstated. Regular security audits serve as a pre-emptive measure to identify vulnerabilities, assess risks and implement corrective actions before they can be exploited by malicious actors. By mandating such audits, the GAD is not only demonstrating foresight but also fostering a culture of vigilance and accountability within Government Departments. Moreover, the emphasis on engaging CERT-in-empanelled agencies underscores the importance of leveraging expertise and best practices in cyber security. ... Equally crucial is the timely implementation of audit findings and recommendations. Too often, audits yield valuable insights that languish in bureaucratic inertia, leaving vulnerabilities unaddressed. Addressing this entails establishing clear lines of responsibility, allocating adequate resources, and instituting mechanisms for continuous monitoring and evaluation.


The Complexity and Need to Manage Mental Well-Being in the Security Team

“Security people are often overwhelmed,” comments Thea Mannix, director of research (and a cyberpsychologist) at Praxis Security Labs. Apart from the day to day work, she adds, “They’re expected to be futurologists able to predict the future, and psychologists able to understand the human elements of security – how users may react to social engineering and how they may subvert security controls to make work easier. And they never get any positive feedback; it’s mostly negative because the whole process of security is mostly negative – stop the outside bad guys doing anything bad, and stop the inside good guys doing anything wrong.” But there’s also a disturbing edge to this ‘human’ side of cybersecurity. Security teams sometimes work with SBIs and the FBI on criminal investigations. Tim Morris, chief security advisor at Tanium, knows what can be involved because he and his team have done this. “We do cybersecurity to protect data and people. And the only reason we must do this is because there’s an evil side of humanity. 


How to Become a Cyber Security Analyst? A Step By Step Guide

If you’re contemplating a career in cybersecurity, you’re positioned advantageously. The field is experiencing a robust surge and is poised for continued expansion in the coming years. ... The core responsibility of a cyber security analyst revolves around safeguarding a company’s network and systems against cyber attacks. This entails various tasks such as researching emerging IT trends, devising contingency plans, monitoring for suspicious activities, promptly reporting security breaches, and educating the organization’s staff on security protocols. Additionally, cyber security analysts play a pivotal role in implementing threat protection measures and security controls. They may conduct simulated security attacks to identify potential vulnerabilities within the organization’s infrastructure. Given the ever-evolving tactics and tools employed by hackers, cyber security analysts must remain abreast of the latest developments in digital threats. Staying informed about emerging cyber threats ensures that analysts can effectively anticipate and counteract evolving security risks.


Viewpoint: AI Is Changing the Cyber Risk Landscape

Any time an organization adopts new technology, it inherently opens itself up to risk by introducing a new set of unknowns into its business practices. Allowing the wrong users access to a program, for example, or flaws in the program’s code are technological issues that can create security vulnerabilities which need to be addressed by IT and cybersecurity professionals. The practice of hacking – where cybercriminals use code to break through an organization’s cybersecurity systems – is increasingly difficult, but the sudden ubiquity of AI offers a new way to create vulnerabilities by targeting system users with lifelike dupes. Emails that look genuine but are designed to extract important security credentials are not a new phenomenon, but generative AI has allowed new, sophisticated forms of phishing attacks to proliferate on an unprecedented scale. Deepfakes are a new form of cyberattack in which criminals develop highly convincing visual and auditory assets to impersonate others.


Achieving Data Excellence: How Generative AI Revolutionizes Data Integration

Data integration combines data from various sources for a unified view. Artificial intelligence (AI) improves integration by automating tasks, boosting accuracy, and managing diverse data volumes. Here are the top four data integration strategies/patterns using AI:

- Automated data matching and merging – AI algorithms, such as ML and natural language processing (NLP), can match and automatically merge data from disparate sources.
- Real-time data integration – AI technologies, such as stream processing and event-driven architectures, can facilitate real-time data integration by continuously ingesting, processing, and integrating data as it becomes available.
- Schema mapping and transformation – AI-driven tools can automate the process of mapping and transforming data schemas from different formats or structures. This includes converting data between relational databases, NoSQL databases, and other data formats, plus handling schema evolution over time.
- Knowledge graphs and graph-based integration – AI can build and query knowledge graphs representing relationships between entities and concepts.
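To make the first of these patterns concrete, here is a minimal sketch of automated record matching and merging. It uses simple fuzzy string similarity from Python's standard library as a stand-in for the ML/NLP matchers the article alludes to; the field names, thresholds, and sample records are illustrative assumptions, not any product's API.

```python
# Minimal sketch of automated data matching and merging. difflib's
# character-level similarity is a placeholder for learned matchers.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity score between two strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_and_merge(source_a: list[dict], source_b: list[dict],
                    key: str = "name", threshold: float = 0.85) -> list[dict]:
    """Merge records from two sources whose key fields look alike."""
    merged, used_b = [], set()
    for rec_a in source_a:
        best, best_score = None, threshold
        for i, rec_b in enumerate(source_b):
            if i in used_b:
                continue
            score = similarity(rec_a[key], rec_b[key])
            if score >= best_score:
                best, best_score = i, score
        if best is not None:
            used_b.add(best)
            merged.append({**source_b[best], **rec_a})  # source A wins conflicts
        else:
            merged.append(dict(rec_a))
    # Keep unmatched records from source B as-is.
    merged.extend(r for i, r in enumerate(source_b) if i not in used_b)
    return merged

crm = [{"name": "Acme Corp.", "region": "EMEA"}]
billing = [{"name": "ACME Corporation", "account_id": 42}]
print(match_and_merge(crm, billing, key="name", threshold=0.6))
```

A production system would add blocking or indexing to avoid the quadratic pairwise comparison, and learned similarity models rather than character-level ratios.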


Generative AI, the Digital Divide, and the Tremendous Opportunity

Access to generative AI and tools such as ChatGPT is becoming the new table stakes for many walks of life. As businesses implement this technology, workers will need to be upskilled. Children -- the next generation of workers -- will need access at school and at home. And leaders must continually strive for an AI future that’s fair and equitable. The reality is that those who know how to use AI tools will have a significant advantage over those who don’t. For telecoms, adopting an ESG-principled strategy to introduce or enhance broadband services in low-income areas is more than an opportunity to enhance the quality of life among people who have been historically under-connected; it can also build their brand and grow their subscriber base. By aligning business strategy with a clear ESG framework, companies can significantly reduce the cost of borrowing when raising capital to invest in major GenAI infrastructure projects. In the KPMG US 2023 ESG and Financial Value survey, 43% of business leaders at companies with more than 10,000 employees cited access to new capital sources as one of the top financial benefits of their ESG strategies.


How to design and deliver an effective cybersecurity exercise

Playbooks come in a variety of styles, including action plans, flow charts and storylines. They are based on cyber-attack scenarios and are used by facilitators to guide participants throughout the cybersecurity exercise. They include pieces of information for participants (e.g., indicators of compromise, a customer complaint, a help desk report, a piece of threat intelligence or a SOC alert), as well as key stages of the exercise. ... An appropriate target audience must be identified before considering the type of cyber exercise to perform. Audiences can consist of different functions, levels and areas of an organization such as executives, crisis management, incident response or operational teams (among others). The audience will shape the objectives, injects, discussion areas and storyline of the scenario. Tailoring these specifically to an audience is paramount to conducting a successful exercise. ... The organization must select suitable targets for cybersecurity exercises. Targets can comprise one or more types of assets, such as critical business applications, technical infrastructure, physical devices, people, or office/factory locations.
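As a concrete illustration of the playbook structure described above, here is a minimal sketch of how injects might be modelled and scheduled for a facilitator. The fields, audiences, and scenario content are invented for illustration and not a standard schema.

```python
# Minimal model of a tabletop-exercise playbook and its injects.
from dataclasses import dataclass, field

@dataclass
class Inject:
    time_offset_min: int          # minutes after exercise start
    audience: str                 # who receives it (e.g., "SOC", "execs")
    kind: str                     # e.g., "SOC alert", "help desk report"
    content: str                  # the information handed to participants

@dataclass
class Playbook:
    scenario: str
    injects: list[Inject] = field(default_factory=list)

    def schedule(self) -> list[Inject]:
        """Injects in the order a facilitator would deliver them."""
        return sorted(self.injects, key=lambda i: i.time_offset_min)

ransomware = Playbook(
    scenario="Ransomware on a critical business application",
    injects=[
        Inject(30, "execs", "customer complaint", "Customers report the portal is down."),
        Inject(0, "SOC", "SOC alert", "EDR flags mass file renames on FILESRV01."),
        Inject(15, "incident response", "threat intelligence", "IoC matches a known ransomware family."),
    ],
)
for inject in ransomware.schedule():
    print(f"T+{inject.time_offset_min:>3} min -> {inject.audience}: {inject.content}")
```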


How data centers can drive a more sustainable world

In spite of the race to greater sustainability, the demand for more data speed and bandwidth is still driving the need for increased DCI network performance, capacity, and scalability. In order to meet these demands while maintaining competitiveness, hyperscalers must cut costs and streamline operational efficiency, even as they enhance capacity. Traditionally, the wavelength range of ROADM networks was limited because conventional transmission solutions were divided into C-band and L-band. This is particularly problematic in metro and long-haul DCI networks. Recent advances in continuous C+L ROADM architecture allow transport platforms to handle both wavelength bands at once. This not only increases maximum transmission capacity and simplifies network capacity upgrades; it also reduces environmental impact by using half the common line system equipment compared with current bolt-on or C-band overbuild approaches. In addition to capacity enhancements, technology advances are contributing to increased distance for long-haul DCI networks.
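A rough back-of-the-envelope calculation shows why adding the L-band roughly doubles a line system's capacity. The band widths, channel spacing, and per-channel rates below are typical approximate figures, not vendor specifications.

```python
# Approximate view of C-band vs. C+L capacity on one fiber pair.
C_BAND_GHZ = 4_800   # roughly 4.8 THz of usable C-band spectrum
L_BAND_GHZ = 4_800   # the L-band adds roughly the same again
CHANNEL_SPACING_GHZ = 75   # e.g., 75 GHz slots for 400G-class waves
PER_CHANNEL_GBPS = 400

for name, spectrum in [("C-band only", C_BAND_GHZ),
                       ("C+L", C_BAND_GHZ + L_BAND_GHZ)]:
    channels = spectrum // CHANNEL_SPACING_GHZ
    print(f"{name}: {channels} channels ~ {channels * PER_CHANNEL_GBPS / 1000:.1f} Tb/s")
```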


The metaverse mystique: Reshaping virtual interaction

For all businesses, the metaverse will eventually develop into a platform where they have to engage with their customers. A host of global brands, from Adidas to JP Morgan, have launched metaverse initiatives. The metaverse offers businesses an unprecedented opportunity to connect with customers on a deeper level: a metaverse platform can give businesses and brands a fuller 360-degree view of their customers, enabling more accurate and granular segmentation analytics and customer insights in ways previously unimaginable. This will facilitate the delivery of products and services with unparalleled precision, aligning them seamlessly with customer insights and expectations. The potential for businesses in the metaverse is staggering, with McKinsey estimating its value creation potential to reach up to $5 trillion by 2030. Beyond consumer-centric applications, industrial uses are also gaining traction. Augmented reality (AR) and virtual reality (VR) tools, along with digital twin architectures, are increasingly finding applications across industrial sectors.



Quote for the day:

"To do great things is difficult; but to command great things is more difficult." -- Friedrich Nietzsche

Daily Tech Digest - February 19, 2024

Why artificial general intelligence lies beyond deep learning

Decision-making under deep uncertainty (DMDU) methods such as Robust Decision-Making may provide a conceptual framework to realize AGI reasoning over choices. DMDU methods analyze the vulnerability of potential alternative decisions across various future scenarios without requiring constant retraining on new data. They evaluate decisions by pinpointing critical factors common among those actions that fail to meet predetermined outcome criteria. The goal is to identify decisions that demonstrate robustness — the ability to perform well across diverse futures. While many deep learning approaches prioritize optimized solutions that may fail when faced with unforeseen challenges (as optimized just-in-time supply systems did in the face of COVID-19), DMDU methods prize robust alternatives that may trade optimality for the ability to achieve acceptable outcomes across many environments. DMDU methods offer a valuable conceptual framework for developing AI that can navigate real-world uncertainties. Developing a fully autonomous vehicle (AV) could demonstrate the application of the proposed methodology. The challenge lies in navigating diverse and unpredictable real-world conditions, thus emulating human decision-making skills while driving.
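As a toy illustration of the robustness idea, the sketch below scores a few hypothetical supply-chain decisions across several futures and picks the one with the smallest worst-case regret; minimax regret is one classic criterion used in Robust Decision-Making. All payoff numbers are invented purely for illustration.

```python
# Minimal minimax-regret sketch in the DMDU spirit: prefer the decision
# whose worst-case shortfall versus the best achievable is smallest.
payoffs = {
    # decision: payoff in each of four hypothetical futures
    "just-in-time supply": [95, 90, 20, 10],   # optimal normally, fragile in shocks
    "buffered inventory":  [70, 72, 65, 60],   # never optimal, rarely bad
    "dual sourcing":       [80, 60, 70, 40],
}

n_scenarios = len(next(iter(payoffs.values())))
best_per_scenario = [max(p[s] for p in payoffs.values()) for s in range(n_scenarios)]

def max_regret(decision: str) -> int:
    """Worst-case shortfall versus the best achievable in each scenario."""
    return max(best_per_scenario[s] - payoffs[decision][s] for s in range(n_scenarios))

robust_choice = min(payoffs, key=max_regret)
for d in payoffs:
    print(f"{d}: max regret = {max_regret(d)}")
print("Robust (minimax-regret) choice:", robust_choice)
```

Note how the just-in-time option wins in the "normal" futures yet carries the largest regret, echoing the COVID-19 example above.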


Bouncing back from a cyber attack

In the case of a cyber attack, the inconceivable has already happened – all you can do now is bounce back. The big-picture issue is that too often IoT (internet of things) networks are filled with bad code, poor data practices, lack of governance, and underinvestment in secure digital infrastructure. Due to the popularity and growth of IoT, manufacturers of IoT devices spring up overnight, promoting products that are often built from lower-quality components and firmware with sometimes well-known vulnerabilities exposed by poor design and production practices. These vulnerabilities are then introduced into a customer environment, increasing risk and possibly remaining unidentified. So there’s a lot of work to do, including creating visibility over deep, widely connected networks with a plethora of devices talking to each other. All too often, IT and OT networks run on the same flat network. Many of these organisations are planning segmentation projects, but such projects are complex and disruptive to implement, so in the meantime companies want to understand what's going on in these environments and minimise disruption in the event of an attack.


Diversity, Equity, and Inclusion for Continuity and Resilience

Among continuity professionals, the average age tends to skew older, so how do we keep bringing new people into the fold and ensure they feel they can learn and be respected in the industry? Students need to be made aware that this is an industry they can step into. Unfortunately, many already know active shooter drills as the norm. They may never have organized one, but they have participated in many of these drills in school. Why not take advantage of that experience for the students who are interested in this field? Taking their advice could make exercises like active shooter or weather events less traumatic. Many have done these drills for at least 13 years, and listening to that experience yields real insight, even from Millennials, who grew up at the forefront of school shootings but never actively rehearsed what to do if one happened while they were in school. These future colleagues’ insights could change how we run specific exercises and events to everyone’s benefit. Still, there must be openness to new and fresh ideas, treating them as valid rather than dismissing them because of the contributors’ age and experience. Similarly, people with disabilities have always been vocal about their needs.


AI’s pivotal role in shaping the future of finance in 2024 and beyond

As AI becomes more embedded in the financial fabric, regulators are crafting a nuanced framework to ensure ethical AI use. The Reserve Bank of India (RBI) and the Securities and Exchange Board of India (SEBI) have initiated guidelines for responsible AI adoption, emphasising transparency, accountability, and fairness in algorithmic decision-making processes. While the benefits are palpable, challenges persist. The rapid pace of AI integration demands a strategic approach to ensure a safe financial ecosystem ... The evolving nature of jobs due to AI necessitates a concerted effort towards upskilling the workforce. A McKinsey Global Institute report indicates that approximately 46% of India’s workforce may undergo significant changes in their job profiles due to automation and AI. To address this, collaborative initiatives between the government, educational institutions, and the private sector are imperative to equip the workforce with the requisite skills for the future. ... The RBI and SEBI have recognised the need for ethical AI use in the financial sector. Establishing clear guidelines and frameworks for responsible AI governance is crucial.


How to proactively prevent password-spray attacks on legacy email accounts

Often with an ISP it’s hard to determine the exact location from which a user is logging in. If they access from a cellphone, often that geographic IP address is in a major city many miles away from your location. In that case, you may wish to set up additional infrastructure to relay their access through a tunnel that is better protected and able to be examined. Don’t assume the bad guys will use a malicious IP address to announce they have arrived at your door. According to Microsoft, “Midnight Blizzard leveraged their initial access to identify and compromise a legacy test OAuth application that had elevated access to the Microsoft corporate environment. The actor created additional malicious OAuth applications.” The attackers then created a new user account to grant consent in the Microsoft corporate environment to the actor-controlled malicious OAuth applications. “The threat actor then used the legacy test OAuth application to grant them the Office 365 Exchange Online full_access_as_app role, which allows access to mailboxes.” This is where my concern pivots from Microsoft’s inability to proactively protect its processes to the larger issue of our collective vulnerability in cloud implementations. 
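For context on the detection side, here is a minimal sketch of spotting a password spray in generic sign-in logs: the telltale signal is a single source IP failing against many distinct accounts, rather than many attempts against one account. The log format and threshold are assumptions for illustration, not any vendor's schema.

```python
# Minimal password-spray detection over hypothetical failed-login records.
from collections import defaultdict

failed_logins = [
    # (source_ip, username) pairs from hypothetical auth logs
    ("203.0.113.7", "alice"), ("203.0.113.7", "bob"),
    ("203.0.113.7", "carol"), ("203.0.113.7", "dave"),
    ("198.51.100.2", "alice"), ("198.51.100.2", "alice"),  # ordinary mistyped password
]

ACCOUNTS_PER_IP_THRESHOLD = 3  # tune to your environment's baseline

targets = defaultdict(set)
for ip, user in failed_logins:
    targets[ip].add(user)

for ip, users in targets.items():
    if len(users) >= ACCOUNTS_PER_IP_THRESHOLD:
        print(f"Possible password spray from {ip}: {len(users)} distinct accounts")
```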


How To Implement The Pipeline Design Pattern in C#

The pipeline design pattern in C# is a valuable tool for software engineers looking to optimize data processing. By breaking down a complex process into multiple stages and then executing those stages in parallel, engineers can dramatically reduce the processing time required. This design pattern also simplifies complex operations and enables engineers to build scalable data processing pipelines. ... The pipeline design pattern is commonly used in software engineering for efficient data processing. This design pattern utilizes a series of stages to process data, with each stage passing its output to the next stage as input. The pipeline structure is made up of three components: the source, where the data enters the pipeline; the stages, each responsible for processing the data in a particular way; and the sink, where the final output goes. Implementing the pipeline design pattern offers several benefits, one of the most significant being efficiency in processing large amounts of data. By breaking down the data processing into smaller stages, the pipeline can handle larger datasets. The pattern also allows for easy scalability, making it easy to add additional stages as needed.
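The article's examples target C#; purely to illustrate the source, stages, and sink structure it describes, here is a minimal language-neutral sketch using Python generators, where each stage lazily consumes the previous stage's output.

```python
# Minimal pipeline: source -> stage -> stage -> sink, each a generator.
def source(lines):
    """Source: where data enters the pipeline."""
    yield from lines

def strip_stage(items):
    """Stage 1: normalize whitespace."""
    for item in items:
        yield item.strip()

def upper_stage(items):
    """Stage 2: transform each item."""
    for item in items:
        yield item.upper()

def sink(items):
    """Sink: where the final output goes."""
    return list(items)

data = ["  hello ", " pipeline", "pattern  "]
result = sink(upper_stage(strip_stage(source(data))))
print(result)  # ['HELLO', 'PIPELINE', 'PATTERN']
```

The same shape maps naturally onto C# iterator methods (`yield return`), or onto TPL Dataflow blocks when the stages need to run in parallel.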


Accuracy Improves When Large Language Models Collaborate

Not surprisingly, this idea of group-based collaboration also makes sense with large language models (LLMs), as recent research from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is now showing. In particular, the study focused on getting a group of these powerful AI systems to work with each other using a kind of “discuss and debate” approach, in order to arrive at the best and most factually accurate answer. Powerful LLMs, like OpenAI’s GPT-4 and Meta’s open source LLaMA 2, have been attracting a lot of attention lately with their ability to generate convincing human-like textual responses about history, politics and mathematical problems, as well as producing passable code, marketing copy and poetry. However, the tendency of these AI tools to “hallucinate”, or come up with plausible but false answers, is well-documented, making LLMs potentially unreliable as a source of verified information. To tackle this problem, the MIT team claims that the tendency of LLMs to generate inaccurate information will be significantly reduced with their collaborative approach, especially when combined with other methods like better prompt design, verification and scratchpads for breaking down a larger computational task into smaller, intermediate steps.
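A minimal sketch of the debate loop the study describes might look like the following, with a stubbed model call standing in for a real LLM API; the prompt wording, agent count, and round count are illustrative assumptions.

```python
# Sketch of multi-agent "discuss and debate": agents answer, then each
# revises its answer after seeing the others' responses.
def call_model(agent_id: int, prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned answer for the demo."""
    return f"[agent {agent_id}'s answer to: {prompt[:40]}...]"

def debate(question: str, n_agents: int = 3, n_rounds: int = 2) -> list[str]:
    answers = [call_model(i, question) for i in range(n_agents)]
    for _ in range(n_rounds):
        revised = []
        for i in range(n_agents):
            others = "\n".join(a for j, a in enumerate(answers) if j != i)
            prompt = (f"Question: {question}\n"
                      f"Other agents answered:\n{others}\n"
                      f"Update your answer, correcting any errors.")
            revised.append(call_model(i, prompt))
        answers = revised
    return answers  # in practice, a consensus step (e.g., majority vote) follows

print(debate("What is 17 * 24?"))
```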


There's AI, and Then There's AGI: What You Need to Know to Tell the Difference

For starters, the ability to perform multiple tasks, as an AGI would, does not imply consciousness or self-will. And even if an AI had self-determination, the number of steps required to decide to wipe out humanity and then make progress toward that goal is too many to be realistically possible. "There's a lot of things that I would say are not hard evidence or proof, but are working against that narrative [of robots killing us all someday]," Riedl said. He also pointed to the issue of planning, which he defined as "thinking ahead into your own future to decide what to do to solve a problem that you've never solved before." LLMs are trained on historical data and are very good at using old information like itineraries to address new problems, like how to plan a vacation. But other problems require thinking about the future. "How does an AI system think ahead and plan how to eliminate its adversaries when there is no historical information about that ever happening?" Riedl asked. "You would require … planning and look ahead and hypotheticals that don't exist yet … there's this big black hole of capabilities that humans can do that AI is just really, really bad at."


Metaverse and the future of product interaction

As the metaverse continues to evolve, so must the approach to product design. This includes considering how familiar objects can be repurposed as functional interface elements in a virtual environment. Additionally, understanding the dynamics of group interactions in virtual spaces is crucial. Designers must anticipate these trends and adapt their designs accordingly, ensuring that products remain relevant and engaging in the ever-changing landscape of the metaverse. In India, the metaverse presents significant opportunities for businesses to redefine consumer experiences. It opens up possibilities for more interactive, personalised, and adventurous engagements with customers. This not only increases customer engagement and loyalty but also creates new avenues for value exchange and revenue streams. The metaverse, with its potential to impact diverse sectors like communications, retail, manufacturing, education, and banking, is poised to be a game-changer in the Indian market. ... As the metaverse continues to expand its reach and influence, businesses and designers in India and around the world must evolve to meet the demands of this new digital era.


Build trust to win out with genAI

Businesses need to adopt ‘responsible technology’ practices, which give them a powerful lever to deploy innovative genAI solutions while building trust with consumers. Responsible tech is a philosophy that aligns an organization’s use of technology with both individuals’ and society’s interests. It includes developing tools, methodologies, and frameworks that observe these principles at every stage of the product development cycle, ensuring that ethical concerns are baked in at the outset. This approach is gaining momentum as people realize how technologies such as genAI can impact their daily lives. Even organizations such as the United Nations are codifying their approach to responsible tech. Consumers urgently want organizations to be responsible and transparent with their use of genAI. This can be a challenge because, when it comes to transparency, there are a multitude of factors to consider, from acknowledging that AI is being used to disclosing what data sources are used, what steps were taken to reduce bias, how accurate the system is, and even the carbon footprint associated with the genAI system.



Quote for the day:

"Entrepreneurs average 3.8 failures before final success. What sets the successful ones apart is their amazing persistence." -- Lisa M. Amos