Daily Tech Digest - April 22, 2024

AI governance for cloud cost optimisation: Best practices in managing expenses in general AI deployment

AI-enabled cost optimization solutions, in contrast to static, threshold-based tools, can actively detect and eliminate idle and underused resources, resulting in significant cost reductions. In addition, they are equipped to foresee and avert possible problems like resource shortages and performance difficulties, guaranteeing continuous and seamless operations. They can also recognize cost anomalies, swiftly respond to them, and even carry out pre-planned actions. This method eliminates the need for continual manual intervention while ensuring effective operations. ... Cloud misconfiguration or improper utilization of cloud resources is frequently the cause of computing surges. One possible scenario is when a worker uses a resource more frequently than necessary. By determining the root cause, organizations can reduce unnecessary cloud resource utilization. By employing AI-enabled cost optimization tools, businesses can optimize consumption and identify irregularities to minimize overspending. Moreover, these tools reduce the time-consuming effort of manually screening and evaluating behavior.
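
The anomaly detection these tools automate can be illustrated with a minimal sketch: a rolling z-score over daily spend that flags days deviating sharply from recent history. The input format, window, and threshold here are assumptions for illustration, not any vendor's API:

```python
import statistics

def flag_cost_anomalies(daily_costs, window=14, threshold=3.0):
    """Flag days whose spend deviates sharply from the trailing window.

    daily_costs: list of (date, usd) tuples, oldest first (hypothetical input).
    """
    anomalies = []
    for i in range(window, len(daily_costs)):
        history = [cost for _, cost in daily_costs[i - window:i]]
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history) or 1e-9  # guard against a flat window
        date, cost = daily_costs[i]
        z = (cost - mean) / stdev
        if z > threshold:
            anomalies.append((date, cost, round(z, 1)))
    return anomalies
```

A production tool would also correlate flagged days with resource tags and usage metadata to identify the root cause, rather than just the spike.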


The first steps of establishing your cloud security strategy

The purpose of CIS Control 3 is to help you create processes for protecting your data in the cloud. Consumers don’t always know that they’re responsible for cloud data security, which means they might not have adequate controls in place. For instance, without proper visibility, cloud consumers might be unaware that they’re leaking their data for weeks, months, or even years. CIS Control 3 walks you through how to close this gap by identifying, classifying, securely handling, retaining, and disposing of your cloud-based data. ... In addition to protecting your cloud-based data, you need to manage your cloud application security in accordance with CIS Control 16. Your responsibility in this area applies to applications developed by your in-house teams and to those acquired from external product vendors. To prevent, detect, and remediate vulnerabilities in your cloud-based applications, you need a comprehensive program that brings together people, processes, and technology. Continuous Vulnerability Management, as discussed in CIS Control 7, sits at the heart of this program.


Want to become a successful data professional? Do these 5 things

"I think tech can be quite scary from the outside, but no one knows everything in technology," she says. Young professionals will quickly learn everyone has gaps in their digital knowledge base -- and that's a good thing because people in IT want to learn more. "That's the brilliant thing about it. Even if you're an expert in one thing, you'll know next to nothing in a different part." Whitcomb says the key to success for new graduates is seeing every obstacle as an opportunity. "Going in from square one is quite intimidating," she says. "But if you have that mindset of, 'I want to learn, I'm willing to learn, and I can think logically' then you'll be great. So, don't be put off because you don't know how to code at the start." ... The prevalence of data across modern business processes means interest in technology has peaked during the past few years. However, some young people might still see technology as a dry and stuffy career -- and Whitcomb says that's a misconception. "That always bugs me a little bit. I think IT is incredibly creative. The things you can do with tech are amazing," she says.


Is Scotland emerging as the next big data center market?

From a sustainability perspective, Scotland is second to none. While the weather may have a miserable reputation, it is nonetheless ideal for data centers. A cooler climate means we can rely more on nature to keep equipment at optimum temperatures, with less of a need for energy-intensive air conditioning. Scotland’s energy mix also consists of a much higher than average share of renewables. The carbon intensity – a measure of how clean electricity is – of Scotland’s grid is well ahead of other European countries and even compares favorably against other parts of the UK. Relocating a data center from Warsaw to Glasgow could cut the carbon intensity of its energy by as much as 99 percent. Scotland’s carbon intensity is one-quarter of London’s, meaning moving a 200-rack facility could save more than six million kilograms of CO2 equivalent, equal to over 14 million miles traveled by the average mid-sized car. ... There is a strong cost imperative too. Relocating to Scotland could save up to 70 percent in operational costs compared to other markets, partially thanks to the cooler climate. The cost of land is another major factor – data center-ready land in Glasgow can cost up to 90 percent less than in Slough, Greater London.
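
The claimed savings follow from simple arithmetic on carbon intensity. The sketch below roughly reproduces the article's order of magnitude; rack power draw, PUE, and the per-grid carbon intensities are illustrative assumptions, not figures from the article:

```python
RACKS = 200
KW_PER_RACK = 5.0        # assumed average IT load per rack
PUE = 1.4                # assumed power usage effectiveness
HOURS_PER_YEAR = 8760

# Assumed grid carbon intensities in gCO2e per kWh (illustrative only;
# note London is four times Glasgow, matching the article's ratio)
INTENSITY = {"Warsaw": 650, "London": 120, "Glasgow": 30}

annual_kwh = RACKS * KW_PER_RACK * PUE * HOURS_PER_YEAR

def annual_tonnes(city):
    return annual_kwh * INTENSITY[city] / 1e6  # grams -> tonnes

saving_kg = (annual_tonnes("Warsaw") - annual_tonnes("Glasgow")) * 1000
print(f"{saving_kg / 1e6:.1f} million kg CO2e saved per year")
# ~7.6 million kg with these assumed figures, the same order of
# magnitude as the article's "more than six million kilograms"
```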


Winning the Gen AI Race with Your Custom Data Strategy

Essentially, the Data Fabric Strategy involves a comprehensive plan to seamlessly integrate diverse data sources, processing capabilities, and AI algorithms to enable the creation, training, and deployment of generative AI models. It provides a unified platform approach to collecting, organizing, and governing data, facilitating the development of winning AI products. The Product Manager establishes the North Star Metrics (NSM) for the product according to the business context; the most prevalent and crucial NSM is user experience, which is contingent upon three pivotal factors. ... In implementing a Data Fabric Strategy, the pinnacle stage is shaping the solution architecture tailored to the Gen AI product. While accountability rests with the Product Manager, the creation of this vital blueprint falls under the purview of the Architect. Dissecting Data Fabric solutions, we find two fundamental components: the user-facing interactions and the robust data processing pipeline.


Disciplined entrepreneurship: 6 questions for startup success

Identify key assumptions to be tested before you begin to make heavy investments in product development. Testing assumptions now is faster and much less costly, allowing you to preserve valuable resources and adjust as needed. Test each of the individual assumptions you have identified. This scientific approach will allow you to understand which assumptions are valid and which ones are not, and then adjust while the cost of doing so is much lower and the change can be made much faster. Define the minimal product you can use to start the iterative customer feedback loop — where the customer gets value from the product and pays for it. You must reduce the variables in the equation to get the customer feedback loop started with the highest possibility of success while simultaneously making the most efficient use of your scarce resources. ... Calculate the annual revenues from the top follow-on markets after you are successful in your beachhead market. This shows the potential that comes from winning your beachhead and motivates you to do so quickly and effectively.


7 innovative ways to use low-code tools and platforms

One of the 7 Rs of cloud app modernization is to replatform components rather than lift and shift entire applications. One replatforming approach is maintaining back-end databases and services while using low-code platforms to rebuild front-end user experiences. This strategy can also enable the development of multiple user experiences for different business purposes, a common practice performed by independent software vendors (ISVs) who build one capability and tailor it to multiple customers. Deepak Anupalli, cofounder and CTO of WaveMaker, says, “ISVs recast their product UX while retaining all their past investment in infrastructure, back-end microservices, and APIs.” ... Another area of innovation to consider is when low-code components can replace in-house commodity components. Building a rudimentary register, login, and password reset capability is simple, but today’s security requirements and user expectations demand more robust implementations. Low-code is one way to upgrade these non-differentiating capabilities without investing in engineering efforts to get the required experience, security, and scalability.


Beyond 24/7: How Smart CISOs are Rethinking Threat Hunting

To combat this phenomenon, CISOs are rethinking their approach, as the model of 24/7 in-house threat hunting is no longer sustainable for many businesses. Instead, we see an increasing focus on value-driven security solutions that make their own tools work better, harder, and more harmoniously together. This means prioritizing tools that leverage telemetry, deliver actionable insights and integrate into existing stacks seamlessly – and don’t just create another source of noise. This is where Managed Detection and Response (MDR) services come in, offering a strategic solution to these challenges. MDR providers employ experienced security analysts who monitor your environment 24/7, leveraging advanced threat detection and analysis tools and techniques. ... Start by evaluating your current security posture. Identify your organization’s specific security needs and vulnerabilities. This helps you understand how MDR can benefit you and what features are most important. Don’t be swayed by brand recognition alone. While established players offer strong solutions, smaller MDR providers can be equally adept, often with greater flexibility and potentially lower costs.


A time to act: Why Indian businesses need to adopt AI agents now

Currently, the conversational AI chatbots in the market listen to what businesses want and deliver exactly that. This is only one aspect of what generative AI can do. AI Agents take it a step further by performing the same functions as conversational AI bots but with the added capability of acting intuitively. For example, when an employee plans a vacation, an AI Agent can build the travel itinerary, book tickets, and even complete the expense report without being asked. And this is just one use case; these assistants can be used across industries. ... Imagine a scenario where an AI Agent is deployed in tandem with IoT devices, monitoring signals that indicate when a machine needs replacement parts. AI Agents can automatically order parts and have them shipped to specific locations, along with recommended times for the technician to arrive for the maintenance work. All of this with no downtime. This is merely scratching the surface of what AI Agents can do. Built on Large Action Models, AI Agents can automate a wide range of tasks across various industries.
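
A minimal sketch of the predictive-maintenance loop described above helps make the idea concrete. Every function name and threshold here is hypothetical; the stubs stand in for a real action model, telemetry feed, procurement API, and scheduling system:

```python
import random
import time

WEAR_THRESHOLD = 0.8  # hypothetical normalized wear score

def read_wear_signal(machine_id):
    # Stub: a real agent would query IoT telemetry here.
    return random.uniform(0.5, 1.0)

def order_part(machine_id, part):
    # Stub: a real agent would call a procurement API here.
    print(f"ordered {part} for {machine_id}")

def schedule_technician(machine_id, when="next maintenance window"):
    # Stub: a real agent would call a scheduling system here.
    print(f"technician booked for {machine_id}, {when}")

def maintenance_agent(machine_id):
    """Poll telemetry; when wear crosses the threshold, act without being asked."""
    while True:
        if read_wear_signal(machine_id) > WEAR_THRESHOLD:
            # A real action model would infer the needed part from the signal.
            order_part(machine_id, part="bearing-assembly")
            schedule_technician(machine_id)
            break
        time.sleep(1)  # polling interval, shortened for the sketch

maintenance_agent("press-07")
```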


6 security items that should be in every AI acceptable use policy

Corporate policies need to include a security item that deals with protecting the sensitive data that the AI system uses. By including a security item that addresses how the AI system uses sensitive data, organizations can promote transparency, accountability, and trust in their AI practices while safeguarding individuals’ privacy and rights. “So if an AI system is being used to assess whether somebody is going to be getting insurance, or healthcare, or a job, that information needs to be used carefully,” says Nader Henein, research vice president, privacy and data protection at Gartner. Companies need to ask what information is going to be given to those AI systems, and what kind of care is going to be taken when they use that data to make sensitive decisions, Henein says. The AI AUP needs to establish protocols for handling sensitive data to safeguard privacy, comply with regulations, manage risks, and maintain trust with users and others. These protocols ensure that sensitive data, such as personal information or proprietary business data, is protected from unauthorized access or misuse.



Quote for the day:

"Your most unhappy customers are your greatest source of learning." -- Bill Gates

Daily Tech Digest - April 19, 2024

Cloud cost management is not working

The Forrester report illuminates significant visibility challenges when using existing CCMO tools. Tracking expenses across different cloud activities, such as data management, egress charges, and application integration, remains a challenge. Finops is normally on the radar, but these enterprises have yet to adopt useful finops practices, with most programs either nonexistent or not yet off the ground, even if funded. Then there’s the fact that enterprises are not good at using these tools yet, and they seem to add more cost with little benefit. The assumption is that they will get better and costs will get under control. However, given the additional resource needs for AI deployments, improvements are not likely to occur for years. At the same time, there is no plan to provide IT with additional funding, and many companies are attempting to hold the line on spending. Despite these challenges, getting cloud spending under control continues to be a priority, even if results do not show that. This means major fixing needs to be done at the architecture and integration level, which most in IT view as overly complex and too expensive to fix. 


Why Selecting the Appropriate Data Governance Operating Model Is Crucial

When deciding on the data governance operating model, you cannot simply pick one approach without evaluating the benefits each one offers. You need to weigh the potential benefits of centralized and decentralized governance models before making a decision. If you find that the benefits of centralizing your governance operations exceed those of a decentralized model by at least 20%, then it’s best to centralize. With a centralized governance model, you can bridge the skills gap, enjoy consistent outcomes across all business units, easily report on operations, ensure executive buy-in at the C-level, and plan for effectiveness in continuous feedback elicitation, improvements, and change management. However, the downside is that centralization often leads to operational rigidity, which reduces motivation among mid-level managers, and the resulting bureaucracy can outweigh the benefits. It’s important to consider socio-cultural aspects when formulating your operating model, as they can significantly influence the success of your organization.


5 Steps Toward Military-Grade API Security

When evaluating client security, you must address environment-specific threats. In the browser, military grade starts with ensuring the best protections against token theft, where malicious JavaScript threats, also known as cross-site scripting (XSS), are the biggest concern. To reduce the impact of an XSS exploit, it is recommended to use the latest and most secure HTTP-only SameSite cookies to transport OAuth tokens to your APIs. Use a backend-for-frontend (BFF) component to issue cookies to JavaScript apps. The BFF should also use a client credential when getting access tokens. ... A utility API then does the cookie issuing on behalf of its SPA without adversely affecting your web architecture. In an OAuth architecture, clients obtain access tokens by running an OAuth flow. To authenticate users, a client uses the OpenID Connect standard and runs a code flow. The client sends request parameters to the authorization server and receives response parameters. However, these parameters can potentially be tampered with. For example, an attacker might replay a request and change the scope value in an attempt to escalate privileges.
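
As a rough illustration of the cookie-issuing step, here is a minimal BFF endpoint sketch in Flask. The endpoint path, token-exchange helper, and cookie name are hypothetical, and a production BFF would run the full OAuth code flow with PKCE against its authorization server:

```python
from flask import Flask, request, make_response

app = Flask(__name__)

def exchange_code_for_tokens(code):
    # Hypothetical helper: POST the authorization code, the BFF's client
    # credential, and the PKCE verifier to the token endpoint.
    raise NotImplementedError

@app.post("/bff/callback")
def callback():
    # The SPA posts the authorization code it received on the redirect URI.
    tokens = exchange_code_for_tokens(request.form["code"])
    resp = make_response({"status": "authenticated"})
    # HTTP-only + SameSite=Strict + Secure: XSS payloads cannot read the
    # token, and the cookie is only sent on same-site HTTPS requests.
    resp.set_cookie(
        "at",
        tokens["access_token"],
        httponly=True,
        secure=True,
        samesite="Strict",
        max_age=900,
    )
    return resp
```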


Break Security Burnout: Combining Leadership With Neuroscience

The problem for cybersecurity pros is that they often get stuck in a constant psychological fight-or-flight response due to the relentless stress cycle of their jobs, Coroneos explains. iRest is a training program that helps them switch out of this cycle, bringing them to a deeper state of relaxation that resets the fight-or-flight response. This helps the brain switch off, so it is not constantly creating stress in the workplace and throughout everyday life, fueling burnout, he says. "We need to get them into a position where they can come into a proper relationship into their subconscious," Coroneos says, adding that so far cybersecurity professionals who have experienced the training — which Cybermindz is currently piloting — report they are sleeping better and making clearer decisions after only a few sessions of the program. Indeed, while burnout remains a serious problem, the message Coroneos and Williams ultimately want to convey is one of hope: there are solutions to the burnout problem currently facing cybersecurity professionals, and the enormous pressures these dedicated professionals face are not being overlooked.


Unlocking Customer Experience: The Critical Role of Your Supply Chain

It is crucial to find a partner that understands that digital transformation alone is not enough. Unlike point solution vendors who solve isolated problems, prioritize a partner that focuses on three main areas: people, processes, and systems. A good partner will begin its approach by understanding what is actually happening with mission-critical processes in the supply chain like inbound and outbound logistics, supplier management, customer service, help desk, and financial processes. Understanding these root causes helps identify opportunities for improvement and automation. Analyzing data and feedback reveals pain points, bottlenecks, and inefficiencies within each process. Utilizing process mapping and performance metrics helps pinpoint areas ripe for enhancement. Automation technologies, like AI and machine learning, streamline repetitive tasks, reducing errors and enhancing efficiency. By continuously assessing and optimizing these processes, businesses can improve responsiveness, reduce costs, and enhance overall supply chain performance, ultimately driving customer satisfaction and competitive advantage.


AI migration woes: Brain drain threatens India’s tech future

To address the challenge of talent migration, the biggest companies in India must work together to democratise access to resources and opportunities within India’s tech ecosystem. One key aspect of this approach involves fostering a culture of open collaboration among key stakeholders, including top-tier venture capitalists (VCs), corporates, academia and leading startups, because no single entity can drive AI innovation in isolation. By creating a collaborative ecosystem where information is freely shared and resources are pooled, these stakeholders can level the playing field and provide equal opportunities for aspiring AI professionals across the nation. This could involve the establishment of platforms dedicated to knowledge exchange, networking events and cross-sector partnerships aimed at accelerating innovation. ... In addition to these fundamental elements, the tech ecosystem in India must also prioritise accessibility and affordability in the adoption of AI-integrated technologies. The future-ready benefits of AI should be democratised, reaching not only large brands but also small and medium-sized enterprises (SMEs), startups and grassroots organisations.


Are you a toxic cybersecurity boss? How to be a better CISO

Though most CISOs treat their employees fairly, CISOs are human beings — with all the frailties, quirks, and imperfections of the human condition. But CISOs behaving badly expose their own organizations to huge risks. ... One of the thorniest challenges of a toxic CISO is that the person causing the problem is also the one in charge, making them susceptible to blind spots about their own behavior. Nicole L. Turner, a specialist in workplace culture and leadership coaching, got a close-up look at this type of myopia when a top exec (in a non-security role) recently hired her to deliver leadership training to the department heads at his company. “He felt like they needed training because he could tell some things were going on with them, that they were burned out and overwhelmed. But as I’m training them, I notice these sidebar conversations [among his staff] that he was the problem, more so than the work itself. It was just such an ironic thing and he didn’t know,” recounts Turner, owner and chief culture officer at Nicole L. Turner Consulting in Washington, D.C. There’s also some truth to the adage that it’s lonely at the top, especially in a hypercompetitive corporate environment.


Who owns customer identity?

Onboarding users securely but still seamlessly is a constant conflict in many types of businesses, from retail and insurance to fintech. ... If you are from a regulated industry, MFA becomes important. Make it risk-based MFA, however, to reduce undue friction. If your business offers a D2C or B2C product or service, seamless onboarding is your number one priority. If user friction is the primary reason for your CIAM initiative, the product team or engineering team should take the lead and bring other teams along. If MFA is the main use case, the CISO should lead the discussions and then bring other teams along. ... If testing or piloting is possible, do so. Experimentation is very valuable in a CIAM context. Whether you are moving to a new CIAM solution, trying a new auth method, or changing your onboarding process in any other way, run a pilot or an A/B test first. Starting small, measuring the results, and making longer-term decisions accordingly is a healthy cycle to follow when it comes to customer identity processes.


GenAI: A New Headache for SaaS Security Teams

The GenAI revolution, whose risks remain in the realm of unknown unknowns, comes at a time when the focus on perimeter protection is becoming increasingly outdated. Threat actors today are increasingly focused on the weakest links within organizations, such as human identities, non-human identities, and misconfigurations in SaaS applications. Nation-state threat actors have recently used tactics such as brute-force password sprays and phishing to successfully deliver malware and ransomware, as well as carry out other malicious attacks on SaaS applications. Complicating efforts to secure SaaS applications, the lines between work and personal life are now blurred when it comes to the use of devices in the hybrid work model. With the temptations that come with the power of GenAI, it will become impossible to stop employees from using the technology, whether sanctioned or not. The rapid uptake of GenAI in the workforce should, therefore, be a wake-up call for organizations to reevaluate whether they have the security tools to handle the next generation of SaaS security threats.


What CIOs Can Learn from an Attempted Deepfake Call

Defending against deepfake threats takes a multifaceted strategy. “There's a three-pronged approach where there's education, there's culture, and there's technology,” says Kosak. NINJIO focuses on educating people on cybersecurity risks, like deepfakes, with short, engaging videos. “If you can deepfake a voice and a face or an image based on just a little bit of information or maybe three to four seconds of that voice tone, that's sending us down a path that is going to require a ton of verification and discipline from the individual’s perspective,” says McAlmont. He argues that an hour or two of annual training is insufficient as threats continue to escalate. More frequent training can help increase employee vigilance and build that culture of talking about cybersecurity concerns. When it comes to training around deepfakes, awareness is key. These threats will continue to come. What does a deepfake sound or look like? (Pretty convincing in many cases.) What are some of the common signs that the person you hear or see isn’t who they say they are?



Quote for the day:

“A real entrepreneur is somebody who has no safety net underneath them.” -- Henry Kravis

Daily Tech Digest - April 17, 2024

Are You Delivering on Developer Experience?

A critical concept in modern developer experience is the “inner loop” of feedback on code changes. When a developer has a quick and familiar system to get feedback on their code, it encourages multiple cycles of testing and experimentation before code is deployed to a final test environment or production. The “outer loop” of feedback involves a more formal process of proposing tests, merging changes, and running integration tests and then end-to-end tests. When problems are found in the outer loop, the result is larger, slower deployments, with developers receiving feedback hours or days after they write code. Outer-loop testing can still be automated and kicked off by the original developer, but another common issue with feedback that comes later in the release cycle is that it comes from human testers or others in the release process. This often results in feedback that is symptomatic rather than identifying root causes. When feedback isn’t clear, it’s as bad or worse than unclear requirements: Developers can’t work quickly on problems they haven’t diagnosed, and they’ve often moved on to other projects in the time between deployment and finding an issue.


The digital tapestry: Safeguarding our future in a hyper-connected world

Data centers, acting as the computational hearts, power grids as the electrical circulatory system, and communication networks as the interconnected neural pathways – these elements form the infrastructure that facilitates the flow of information, the very essence of modern life. But like any complex biological system, they have vulnerabilities. A sophisticated cyberattack can infiltrate a data center, disrupting critical services. A natural disaster can sever communication links, isolating entire regions. These vulnerabilities highlight the paramount importance of resilience. We must design and maintain infrastructure that can withstand these disruptions, adapt to changing demands, and recover swiftly from setbacks. This intricate dance becomes even more critical as we attempt to seamlessly integrate revolutionary technologies like artificial intelligence (AI) into the fabric of our critical infrastructure. As we know, AI offers incredible potential, functioning like a highly sophisticated adaptive learning algorithm within the data center and critical infrastructure network. 


5 Strategies To Get People To Listen To You At Work

Credibility is currency at work. It is built over time, not by title or position but through displays of integrity, expertise, and knowledge. To be considered credible we need to have something valuable to say, and we can hone that by investing in continuous learning, staying abreast of industry trends, and demonstrating an ability to contribute to the success of the team through our actions and contributions. ... Tailor your message to resonate with the concerns, interests, and communication preferences of those you’re addressing. Speaking to executives, for instance, demands clarity, brevity, and alignment with strategic goals. Anticipate their probing questions about risks and opportunities and emphasize the impact on the bottom line. ... When people come to speak with you, silence your phone and computer and give them your full attention. Ask them follow-up questions, take notes, and adopt a mindset of learning. By demonstrating genuine interest and appreciation for your team members’ viewpoints, you will foster a culture of collaboration and mutual respect that encourages others to listen to you in turn.


Thinking outside the code: How the hacker mindset drives innovation

The hacker mindset has a healthy disrespect for limitations. It enjoys challenging the status quo and looking at problems with a “what if” mentality: “what if a malicious actor did this?” or “what if we look at data security from a different angle?” This pushes tech teams to think outside the code and explore more unconventional solutions. In its essence, hacking is about creating new technologies or using existing technologies in unexpected ways. It’s about curiosity, the pursuit of knowledge, wondering “what else can this do?” I can relate this to movies like The Matrix; it’s about not accepting reality as a “read-only” situation. It’s about changing your technical reality, learning which software elements can be manipulated, changed or re-written completely. ... Curiosity is one of the most important elements to fuel growth. Organizations with a “question everything” attitude will be the first to adapt to new threats, the first to seize opportunities, and the last to become obsolete. For me, ideal organizations are tech-driven playgrounds that encourage experimentation and celebrate failure as progress.


SAS Viya and the pursuit of trustworthy AI

Ensuring ethical use of AI starts before a model is deployed—in fact, even before a line of code is written. A focus on ethics must be present from the time an idea is conceived and persist through the research and development process, testing, and deployment, and must include comprehensive monitoring once models are deployed. Ethics should be as essential to AI as high-quality data. It can start with educating organizations and their technology leaders about responsible AI practices. So many of the negative outcomes outlined here arise simply from a lack of awareness of the risks involved. If IT professionals regularly employed the techniques of ethical inquiry, the unintended harm that some models cause could be dramatically reduced. ... Because building a trustworthy AI model requires a robust set of training data, SAS Viya is equipped with strong data processing, preparation, integration, governance, visualization, and reporting capabilities. Product development is guided by the SAS Data Ethics Practice (DEP), a cross-functional team that coordinates efforts to promote the ideals of ethical development—including human centricity and equity—in data-driven systems. 


From skepticism to strength: The evolution of Zero Trust

The core concepts are the same. The principle of least privilege and an assume-breach mentality are still key. For example, backup management systems must be isolated on the network so that no unauthenticated users can access them. Likewise, the backup storage system itself must be isolated. Immutability is also key. Having backup data that cannot be changed or tampered with means that if repositories are reached by attacks like ransomware, the backups cannot be affected by the malware. Assuming a breach also means businesses should not implicitly ‘trust’ their backups after an attack. Having processes to properly validate the backup, or ‘clean’ it before attempting system recovery, is vital to ensure you are not simply restoring a still-compromised environment. The final layer of distrust is to have multiple copies of your backups – fail-safes in case one (or more) are compromised. The best practice is to keep three copies of your backup, two stored on different media types, one stored offsite, and one kept offline. With these layers of resilience, you can start to consider your backup as Zero Trust. Zero Trust Data Resilience, just like zero trust, is a journey: you cannot implement it all at once.
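
Immutability of the kind described is often implemented with object-storage retention locks. Below is a minimal sketch using AWS S3 Object Lock via boto3; the bucket name and retention period are assumptions, and Object Lock must have been enabled when the bucket was created:

```python
import boto3

s3 = boto3.client("s3")

# Default retention for new backup objects: write-once-read-many for 30 days.
# COMPLIANCE mode means no one, including the root account, can shorten or
# remove the retention period, so ransomware cannot encrypt or delete backups.
s3.put_object_lock_configuration(
    Bucket="backup-vault",  # assumed bucket, created with Object Lock enabled
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```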


Where in the world is your AI? Identify and secure AI across a hybrid environment

“Your AI strategy is as good as your data strategy,” says Brad Arkin, chief trust officer at Salesforce. “Organizations adopting AI must balance trust with innovation. Tactically, that means companies need to do their diligence — for example, taking the time to classify data and implement specific policies for AI use cases.” ... Threat vectors like the DNS or APIs connecting to backend or cloud-based data lakes or repositories, particularly over IoT (internet of things), constitute two major vulnerabilities to sensitive data, adds Julie Saslow Schroeder, a chief legal officer and pioneer in AI and data privacy laws and SaaS platforms. “By putting up insecure chatbots connecting to vulnerable systems, and allowing them access to your sensitive data, you could break every global privacy regulation that exists without understanding and addressing all the threat vectors.” ... Arkin says security is a shared responsibility between cloud/SaaS provider and enterprise customers, emphasizing optional detection controls like event monitoring and audit trails that help customers gain insights into who’s accessing their data, for what purpose, and the type of processing being done.


Where Are You on the Cybersecurity Readiness Index? Cisco Thinks You’re Probably Overconfident

As we noted, cybersecurity readiness is alarmingly low across the board. However, that’s not reflected in the confidence of the companies that responded to the Cisco study. Some 80% of respondents, down slightly from last year, say they’re moderately to very confident in their ability to stay resilient. Cisco believes their confidence is misplaced and that they have not assessed the scale of their challenges. I agree that such confidence will only get companies in trouble. With cybersecurity, it’s best to maintain a healthy paranoia and plan for the worst. No one thinks they’ll get in a car accident from texting on their phones until it happens. That’s when people change their behavior. There are many other revealing takeaways in this nearly 30-page report. But there’s nothing more alarming than the fact that—even after decades of having it driven home and having boardrooms and c-suites supposedly buy in—cyber threats are still taken too lightly. There are gaps in maturity, coverage, talent, and self-awareness. The underlying cause of these gaps is hard to pin down. But it likely comes from how we can all hold contradictory beliefs in our heads simultaneously. We can all freely acknowledge that cybersecurity is a significant threat.


The Global Menace of the Russian Sandworm Hacking Team

The group's ambitions have long been global: "The group’s readiness to conduct cyber operations in furtherance of the Kremlin’s wider strategic objectives globally is ingrained in its mandate." Past attacks include a 2016 hack against the Democratic National Committee, the 2017 NotPetya wave of encrypting software and the 2018 unleashing of malware known as Olympic Destroyer that disrupted the winter Olympics being held in South Korea. The group has recently turned to mobile devices and networks including a 2023 attempt to deploy malware programmed to spy on Ukrainian battlefield management apps. According to Mandiant, the group is directing and influencing the development of "hacktivist" identities in a bid to augment the psychological effects of its operations. Especially following the February 2022 invasion, Sandworm has used a series of pro-Russian Telegram channels including XakNet Team and Solntsepek to claim responsibility for hacks and leak stolen information. Sandworm also appears to have a close relationship with CyberArmyofRussia_Reborn.


How AI is Transforming Traditional Code Review Practices

The most effective use of AI in software development marries its strengths with the irreplaceable intuition, creativity, and experience of human developers. This synergistic approach leverages AI for what it does best — speed, consistency, and automation — while relying on humans for strategic decision-making and nuanced understanding that AI (currently) cannot replicate. AI can now be used to address the challenges of the traditionally human-centric process of code review. For example, AI can scan entire code repositories and workflow systems to understand the context in which the codebase runs. ... Future advancements will see AI evolve into the role of a collaborator, capable of more complex reasoning, offering design suggestions, best practices, and even predicting or simulating the impact of code changes on software functionality and performance. AI can provide deeper insights into code quality, offer personalized feedback, and play a key role in instilling a culture of learning and improvement within development teams.



Quote for the day:

"It is in your moments of decision that your destiny is shaped." -- Tony Robbins

Daily Tech Digest - April 16, 2024

How to Build a Successful AI Strategy for Your Business in 2024

With a solid understanding of AI technology and your organization’s priorities, the next step is to define clear objectives and goals for your AI strategy. Focus on identifying the problems that AI can solve most effectively within your organization. These objectives should be specific, measurable, achievable, relevant, and time-bound (SMART). ... By setting well-defined objectives, you can create a targeted AI strategy that delivers tangible results and aligns with your overall business priorities. An AI implementation strategy often requires specialized expertise and tools that may not be available in-house. To bridge this gap, identify potential partners and vendors who can provide the necessary support for your AI strategy. Start by researching AI and machine learning companies that have a proven track record of working in your industry. When evaluating potential partners, consider factors such as their technical capabilities, the quality of their tools and platforms, and their ability to scale as your AI needs grow. Look for vendors who offer comprehensive solutions that cover the entire AI lifecycle, from data preparation and model development to deployment and monitoring.


Internet can achieve quantum speed with light saved as sound

When transferring information between two quantum computers over a distance—or among many in a quantum internet—the signal will quickly be drowned out by noise. The amount of noise in a fiber-optic cable increases exponentially the longer the cable is. Eventually, data can no longer be decoded. The classical Internet and other major computer networks solve this noise problem by amplifying signals in small stations along transmission routes. But for quantum computers to apply an analogous method, they must first translate the data into ordinary binary number systems, such as those used by an ordinary computer. This won't do. Doing so would slow the network and make it vulnerable to cyberattacks, as the odds of classical data protection being effective in a quantum computer future are very bad. "Instead, we hope that the quantum drum will be able to assume this task. It has shown great promise as it is incredibly well-suited for receiving and resending signals from a quantum computer. So, the goal is to extend the connection between quantum computers through stations where quantum drums receive and retransmit signals, and in so doing, avoid noise while keeping data in a quantum state," says Kristensen.
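
The exponential loss described here is standard fiber attenuation: power decays by a fixed number of decibels per kilometer, so the surviving fraction shrinks exponentially with length. A quick illustration, where the 0.2 dB/km figure is a typical assumed value for telecom fiber at 1550 nm:

```python
ALPHA_DB_PER_KM = 0.2  # assumed typical attenuation of telecom fiber

def surviving_fraction(length_km):
    """Fraction of optical power remaining after length_km of fiber."""
    return 10 ** (-ALPHA_DB_PER_KM * length_km / 10)

for km in (50, 100, 500):
    print(f"{km:4d} km -> {surviving_fraction(km):.2e} of the signal survives")
# 50 km -> ~10%, 100 km -> ~1%, 500 km -> ~1e-10. Classical networks fix
# this with amplifier stations; quantum signals cannot be copied and
# amplified, hence the proposed quantum-drum relay stations.
```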


Better application networking and security with CAKES

A major challenge in enterprises today is keeping up with the networking needs of modern architectures while also keeping existing technology investments running smoothly. Large organizations have multiple IT teams responsible for these needs, but at times, the information sharing and communication between these teams is less than ideal. Those responsible for connectivity, security, and compliance typically live across networking operations, information security, platform/cloud infrastructure, and/or API management. These teams often make decisions in silos, which causes duplication and integration friction with other parts of the organization. Oftentimes, “integration” between these teams is through ticketing systems. ... Technology alone won’t solve some of the organizational challenges discussed above. More recently, the practices that have formed around platform engineering appear to give us a path forward. Organizations that invest in platform engineering teams to automate and abstract away the complexity around networking, security, and compliance enable their application teams to go faster.


AI set to enhance cybersecurity roles, not replace them

Ready or not, though, AI is coming. That being the case, I’d caution companies, regardless of where they are on their AI journey, to understand that they will encounter challenges, whether from integrating this technology into current processes or ensuring that staff are properly trained in using this revolutionary technology, and that’s to be expected. As a cloud security community, we will all be learning together how we can best use this technology to further cybersecurity. ... First, companies need to treat AI with the same consideration as they would a person in a given position, emphasizing best practices. They will also need to determine the AI’s function — if it merely supplies supporting data in customer chats, then the risk is minimal. But if it integrates and performs operations with access to internal and customer data, it’s imperative that they prioritize strict access control and separate roles. ... We’ve been talking about a skills gap in the security industry for years now and AI will deepen that in the immediate future. We’re at the beginning stages of learning, and understandably, training hasn’t caught up yet.


Why employee recognition doesn't work: The dark side of boosting team morale

Despite the importance of appreciation, many workplaces prioritise performance-based recognition, inadvertently overlooking the profound impact of genuine appreciation. This preference for recognition over appreciation can lead to detrimental outcomes, including conditionality and scarcity. Conditionality in recognition arises from its link to past achievements and performance outcomes. Employees often feel pressured to outperform their peers and surpass their past accomplishments to receive recognition, fostering a hypercompetitive work environment that undermines collaboration and teamwork. Furthermore, the scarcity of recognition exacerbates this issue, as tangible rewards such as bonuses or promotions are limited. In this competitive landscape, employees may feel undervalued, leading to disengagement and disillusionment. To foster an inclusive and supportive workplace culture, organisations must recognise the intrinsic value of appreciation alongside performance-based recognition. Embracing appreciation cultivates a culture of gratitude, empathy, and mutual respect, strengthening interpersonal connections and boosting employee morale.


Improving decision-making in LLMs: Two contemporary approaches

Training LLMs in context-appropriate decision-making demands a delicate touch. Currently, two sophisticated approaches posited by contemporary academic machine learning research suggest alternate ways of enhancing the decision-making processes of LLMs to parallel those of humans. The first, AutoGPT, uses a self-reflexive mechanism to plan and validate the output; the second, Tree of Thoughts (ToT), encourages effective decision-making by disrupting traditional, sequential reasoning. AutoGPT represents a cutting-edge approach in AI development, designed to autonomously create, assess and enhance its models to achieve specific objectives. Academics have since improved the AutoGPT system by incorporating an “additional opinions” strategy involving the integration of expert models. This presents a novel integration framework that harnesses expert models, such as analyses from different financial models, and presents them to the LLM during the decision-making process. In a nutshell, the strategy revolves around enriching the model’s information base with relevant information.
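
To make the Tree of Thoughts idea concrete, here is a minimal breadth-first sketch: the model proposes several partial "thoughts" per state, a scorer ranks them, and only the best few are expanded further. The propose and score functions are placeholder stubs standing in for real LLM calls, not the paper's reference implementation:

```python
def propose_thoughts(state, k=3):
    # Stub: a real implementation would prompt an LLM for k candidate
    # next steps given the partial solution in `state`.
    return [state + [f"step-{len(state)}.{i}"] for i in range(k)]

def score_thought(state):
    # Stub: a real implementation would ask the LLM (or a heuristic)
    # to rate how promising this partial solution is.
    return 1.0 / (1 + len(state[-1]))

def tree_of_thoughts(root, depth=3, beam=2):
    """Breadth-first search over thoughts, keeping the `beam` best per level."""
    frontier = [root]
    for _ in range(depth):
        candidates = [t for s in frontier for t in propose_thoughts(s)]
        candidates.sort(key=score_thought, reverse=True)
        frontier = candidates[:beam]  # prune to the most promising branches
    return frontier[0]  # best chain of thoughts found

print(tree_of_thoughts(root=["problem statement"]))
```

Unlike purely sequential chain-of-thought prompting, this structure lets the model back out of unpromising lines of reasoning, which is the disruption of sequential reasoning the article refers to.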


Unpacking the Executive Order on Data Privacy: A Deeper Dive for Industry Professionals

For privacy professionals, the order underscores the ongoing challenge of protecting sensitive information against increasingly sophisticated threats. That’s important, and shouldn’t be overlooked. Yet the White House has admitted that this order isn’t a silver bullet for all the nation’s data privacy challenges. That candor is striking. It echoes a sentiment familiar to many of us in the industry: the complexities of protecting personal information in the digital age cannot be fully addressed through singular measures against external threats. Instead, this task requires a long-term, thoughtful, multi-faceted approach – one that also confronts the internal challenges to data privacy posed by Big Tech, domestic data brokers, and foreign governments that exist outside of the designated “countries of concern” category. ... The extensive collection, usage, and sale of personal data by domestic entities—including but not limited to Big Tech companies, data brokers, and third-party vendors—poses significant risks. These practices often lack transparency and accountability, fueling privacy breaches, identity theft, and eroding public trust and individual autonomy.


10 tips to keep IP safe

CSOs who have been protecting IP for years recommend doing a risk and cost-benefit analysis. Make a map of your company’s assets and determine what information, if lost, would hurt your company the most. Then consider which of those assets are most at risk of being stolen. Putting those two factors together should help you figure out where to best spend your protective efforts (and money). If information is confidential to your company, put a banner or label on it that says so. If your company data is proprietary, put a note to that effect on every log-in screen. This seems trivial, but if you wind up in court trying to prove someone took information they weren’t authorized to take, your argument won’t stand up if you can’t demonstrate that you made it clear that the information was protected. ... Awareness training can be effective for plugging and preventing IP leaks, but only if it’s targeted to the information that a specific group of employees needs to guard. When you talk in specific terms about something that engineers or scientists have invested a lot of time in, they’re very attentive. As is often the case, humans are the weakest link in the defensive chain.


Types of Data Integrity

Here are a few data integrity issues and risks many organizations face: Compromised hardware: Power outages, fire sprinklers, or a clumsy person knocking a computer to the floor are examples of situations that can cause the loss of vital data or its corruption. From a security perspective, compromised hardware also includes hardware that has been hacked. Cyber threats: Cybersecurity attacks – phishing, malware – present a serious threat to data integrity. Malicious software can corrupt or alter critical data within a database. Additionally, hackers gaining unauthorized access can manipulate or delete data. If changes are made as a result of unauthorized access, it may be a failure in data security. ... Human error: A significant source of data integrity problems is human error. Mistakes made during manual entries can produce inaccurate or inconsistent data that then gets stored in the database. Data transfer errors: During the transfer of data, data integrity can be compromised. Transfer errors can damage data integrity, especially when moving massive amounts of data during extract, transform, and load processes, or when moving the organization’s data to a different database system.
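
Transfer errors of the kind listed above are commonly caught with checksums: hash the data before it leaves the source system and verify the digest after it lands. A minimal sketch, with illustrative file paths:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large extracts need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Illustrative paths: compute on the source before transfer, verify at the target.
source_digest = sha256_of("export/orders.csv")
target_digest = sha256_of("landing/orders.csv")
assert source_digest == target_digest, "integrity check failed: re-run the transfer"
```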


Sisense Breach Highlights Rise in Major Supply Chain Attacks

Many of the details of the attack are not yet clear, but the breach may have exposed hundreds of Sisense's prominent customers to a supply chain attack that gave hackers a backdoor into the company's customer networks, a CISA official told Information Security Media Group. Experts said the attack suggests trusted companies are still failing to implement proactive defensive measures to spot supply chain attacks - such as robust access controls, real-time threat intelligence and regular security assessments - at a time when organizations are increasingly reliant on interconnected ecosystems. "These types of software supply chain attacks are only possible through compromised developer credentials and account information from an employee or contractor," said Jim Routh, chief trust officer for the software security company Saviynt. The breach highlights the need for enterprises to improve their identity access management capabilities for cloud-based services and other third parties, he said. Security intelligence platform Censys published insights into the Sisense breach Friday.



Quote for the day:

"Success is the progressive realization of predetermined, worthwhile, personal goals." -- Paul J. Meyer

Daily Tech Digest - April 15, 2024

Generative AI Strategy For Enterprise

These guidelines align GenAI adoption with enterprise business initiatives. Identify the business challenges that require attention, and understand the business benefits of AI adoption that are critical to the enterprise’s success. Select targeted use cases and perform proofs of concept (POCs) that can deliver the desired business and operational outcomes. AI use cases should not be viewed in isolation: AI initiatives and technology should be integrated into existing business processes and workflows to optimize and streamline them. Build value through improved productivity, growth, and new business models. ... Prioritize GenAI use-case initiatives based on their potential value and feasibility to execute. Implement a model development lifecycle that includes rigorous testing, validation, and documentation of products and services. Build a roadmap that prioritizes and simplifies the actions required to deliver the identified GenAI applications. Create processes for ongoing monitoring and auditing of GenAI systems to ensure responsible use of AI, compliance with legal and ethical standards, and control of algorithmic bias.


Do cloud-based genAI services have an enterprise future?

“Given the data gravity in the cloud, it is often the easiest place to start with training data. However, there will be a lot of use cases for smaller LLMs and AI inferencing at the edge. Also, cloud providers will continue to offer build-your-own AI platform options via Kubernetes platforms, which have been used by data scientists for years now,” Sustar said. “Some of these implementations will take place in the data center on platforms such as Red Hat OpenShift AI. Meanwhile, new GPU-oriented clouds like Coreweave will offer a third option. This is early days, but managed AI services from cloud providers will remain central to the AI ecosystem.” And while smaller LLMs are on the horizon, enterprises will still use major companies’ AI cloud services when they need access to very large LLMs, according to Litan. Even so, more organizations will eventually be using small LLMs that run on much smaller hardware, “even as small as a common laptop.” “And we will see the rise of services companies that support that configuration along with the privacy, security and risk management services that will be required,” Litan said.


6 bad cybersecurity habits that put SMBs at risk

Cybersecurity can’t be addressed with technology alone and in many ways it’s a human problem, according to Sage. “Technology enables attacks, technology facilitates preventing attacks, technology helps with cleaning up after an attack, but that technology requires a knowledgeable human to be effective, at least for now,” they say. This also feeds into other problems, such as a lack of budget and no dedicated responsibility for cybersecurity. “These are significant challenges for SMBs, leaving them without guidance on compliance frameworks and a clear direction, and reliant on providers for support,” says Iqbal. ... Adopting good cyber hygiene habits should be a no-brainer, although adoption can be hit and miss. For instance, allowing the use of weak passwords is all too common, according to Iqbal. He’s also found instances where the default password for logins has not been changed, or where all the passwords for security servers have been changed to a single password and there isn’t a separate administrative password. “The admin account is the most lucrative account threat actors are looking to compromise. It just takes one compromise and then the keys to the kingdom are flung open to all your potential threat actors,” he says.


Generative AI is coming for healthcare, and not everyone’s thrilled

While generative AI shows promise in specific, narrow areas of medicine, experts like Borkowski point to the technical and compliance roadblocks that must be overcome before generative AI can be useful — and trusted — as an all-around assistive healthcare tool. “Significant privacy and security concerns surround using generative AI in healthcare,” Borkowski said. “The sensitive nature of medical data and the potential for misuse or unauthorized access pose severe risks to patient confidentiality and trust in the healthcare system. Furthermore, the regulatory and legal landscape surrounding the use of generative AI in healthcare is still evolving, with questions regarding liability, data protection and the practice of medicine by non-human entities still needing to be solved.” Even Thirunavukarasu, bullish as he is about generative AI in healthcare, says that there needs to be “rigorous science” behind tools that are patient-facing. “Particularly without direct clinician oversight, there should be pragmatic randomized control trials demonstrating clinical benefit to justify deployment of patient-facing generative AI,” he said. 


State of the CIO, 2024: Change makers in the business spotlight

The push for innovation requires a steady hand, and CIOs are stepping in to provide guidance, including orienting the greater enterprise to the potential — and the pitfalls — of new technologies like AI. Eighty-five percent of respondents to the 2024 State of the CIO survey view the CIO as a critical change maker and a much-needed resource given the pace and scale of change, amplified by the frenzy around AI. “With all the hype of AI and the velocity at which technology is evolving, my focus as a CIO continuously and relentlessly has to be through the lens of strategy, execution, and culture,” says Sanjeev Saturru, CIO at Casey’s, the third-largest convenience store chain in the United States. ... “Eighteen months ago, AI was an interesting topic, but today, if you don’t have a plan to elevate experience via AI you are behind,” says LaQuinta. “We have a maniacal focus on maximizing the contribution of advanced intelligence, supported by AI. That could be making information available at the click of a button to help advisors be more efficient with their time or to serve clients better in a hyperpersonalized way.”


Cloned Voice Tech Is Coming for Bank Accounts

At many financial institutions, your voice is your password. Tiny variations in pitch, tone and timbre make human voices unique - apparently making them an ideal method for authenticating customers phoning for service. Major banks across the globe have embraced voice print recognition. It's an ideal security measure, as long as computers can't be trained to easily synthesize those pitch, tone and timbre characteristics in real time. They can. Generative artificial intelligence bellwether OpenAI in late March announced a preview of what it dubbed Voice Engine, technology that with a 15-second audio sample can generate natural-sounding speech "that closely resembles the original speaker." While OpenAI touted the technology for the good it could do - instantaneous language translation, speech therapy, reading assistance - critics' thoughts went immediately to where it could do harm, including in breaking that once ideal authentication method for keeping fraudsters out. It also could supercharge impersonation fraud fueling "child in trouble" and romance scams as well as disinformation.


Data pipelines for the rest of us

In some ways, Airflow is like a seriously upgraded cron job scheduler. Companies start with isolated systems, which eventually need to be stitched together. Or, rather, the data needs to flow between them. As an industry, we’ve invented all sorts of ways to manage these data pipelines, but as data increases, the systems to manage that data proliferate, not to mention the ever-increasing sophistication of the interactions between these components. It’s a nightmare, as the Airbnb team wrote when open sourcing Airflow: “If you consider a fast-paced, medium-sized data team for a few years on an evolving data infrastructure and you have a massively complex network of computation jobs on your hands, this complexity can become a significant burden for the data teams to manage, or even comprehend.” Written in Python, Airflow naturally speaks the language of data. Think of it as connective tissue that gives developers a consistent way to plan, orchestrate, and understand how data flows between every system. A significant and growing swath of the Fortune 500 depends on Airflow for data pipeline orchestration, and the more they use it, the more valuable it becomes. 
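
For readers new to Airflow, a minimal DAG gives the flavor of how such pipelines are declared in Python. This sketch assumes the Airflow 2.x TaskFlow API, and the task bodies are placeholders rather than a real pipeline:

```python
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def nightly_pipeline():
    @task
    def extract():
        # Placeholder: pull rows from a source system.
        return [{"id": 1, "amount": 42}]

    @task
    def transform(rows):
        # Placeholder: reshape records for the warehouse.
        return [{**r, "amount_usd": r["amount"]} for r in rows]

    @task
    def load(rows):
        # Placeholder: write to the destination. Airflow handles scheduling,
        # retries, and the dependency graph implied by these calls.
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))

nightly_pipeline()
```

The appeal is visible even in this toy: dependencies are ordinary function composition, and the scheduler, retry logic, and monitoring come for free.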


The 5 Steps to Crafting an Impactful Enterprise Architecture Communication Strategy

To successfully convey the significance of enterprise architecture within an organization, a structured and strategic approach to communication is crucial. Here’s an overview of the five pivotal steps to create an impactful enterprise architecture communication strategy: Clarify Strategic Objectives: Define clear-cut enterprise architecture objectives that align with the broader vision of the organization. ... Contextual Understanding: Assess the current state of enterprise architecture in your organization and the specific goals you seek to achieve through this communication strategy. ... Audience Insights: Segment your internal audience to understand the varying levels of EA awareness and the distinct needs across departments. ... Selecting Suitable Communication Tools: With a plethora of digital tools available, it’s essential to choose those that best align with your enterprise architecture communication goals. ... Developing the EA Communication Plan: Integrate all insights and choices into a coherent communication plan that outlines how enterprise architecture will be communicated across the organization. 


A Call for Technology Resilience

A major inflection point in application development has been the adoption of Agile. With iterative, Agile application development, an application or system is never finished. It's continuously changing as business conditions and circumstances change. Both users and IT accept this iterative development without endpoints. On the other hand, endpoints (and more of them!) in IT projects can also foster technology resilience. Endpoints foster resilience because a large project that gets interrupted by an immediate and overriding business necessity is more easily paused if it is structured as a series of mini projects that deliver incremental functionality. ... Your network goes down under a malware attack, but your network guru has just left the company for another opportunity. Do you have someone who can step in and do the work? Or, what if your DBA leaves? How long can you delay defining an AI data architecture, and will it harm the company competitively? To achieve IT roster depth, staff must be trained in new responsibilities, or at least cross-trained in different roles that they can assume if needed.


SaaS Tools: Major Threat Vector for Enterprise Security

When considering SaaS security risks, organizations have to take into account whether the SaaS provider is an established player or a startup, Lobo said. Established players have the resources to invest heavily in the security of their applications, and are less vulnerable to code injection attacks. Organizations do not have the auditing powers to measure an established vendor's security credentials and have no recourse but to trust the vendor. But when it comes to dealing with smaller companies, organizations can scrutinize encryption and cloud security practices, evaluate supply chains, check for vulnerabilities in the application code and conduct frequent security assessments. Lobo said many organizations today rely on services such as SecurityScorecard, UpGuard and similar companies that keep track of vulnerabilities in enterprise software and alert users, giving them the opportunity to patch third-party software prior to exploitation. Shankar Ramaswamy, solutions director at Bangalore-based IT consultancy giant Wipro, said organizations using third-party SaaS applications must focus on three major aspects - strengthen endpoint security, minimize the application's access to internal resources and replace passwords with multifactor authentication.



Quote for the day:

"The only way to do great work is to love what you do." -- Steve Jobs

Daily Tech Digest - April 14, 2024

Why small language models are the next big thing in AI

The complexity of tools and techniques required to work with LLMs also presents a steep learning curve for developers, further limiting accessibility. There is a long cycle time for developers, from training to building and deploying models, which slows down development and experimentation. ... Enter small language models. SLMs are more streamlined versions of LLMs, with fewer parameters and simpler designs. They require less data and training time—think minutes or a few hours, as opposed to days for LLMs. This makes SLMs more efficient and straightforward to implement on-site or on smaller devices. One of the key advantages of SLMs is their suitability for specific applications. Because they have a more focused scope and require less data, they can be fine-tuned for particular domains or tasks more easily than large, general-purpose models. This customization enables companies to create SLMs that are highly effective for their specific needs, such as sentiment analysis, named entity recognition, or domain-specific question answering. 
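To make the fine-tuning point concrete, here is a minimal sketch using the Hugging Face Transformers library to adapt a small pretrained model for sentiment analysis. The model and dataset choices are illustrative assumptions, not from the article, and a real project would add evaluation and hyperparameter tuning:

    # Illustrative sketch: fine-tuning a small language model for binary
    # sentiment classification. Model/dataset names are assumptions.
    from datasets import load_dataset
    from transformers import (
        AutoModelForSequenceClassification,
        AutoTokenizer,
        Trainer,
        TrainingArguments,
    )

    model_name = "distilbert-base-uncased"  # a compact encoder model
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=2
    )

    dataset = load_dataset("imdb")  # binary sentiment movie reviews

    def tokenize(batch):
        # Fixed-length padding keeps the default data collator happy.
        return tokenizer(
            batch["text"], truncation=True, padding="max_length", max_length=256
        )

    tokenized = dataset.map(tokenize, batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="slm-sentiment", num_train_epochs=1),
        # A small training slice keeps the run in the minutes-not-days range
        # the article describes for SLMs.
        train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    )
    trainer.train()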


Navigating the AI revolution

Regulators such as the European Union (EU) closely monitor data center energy usage through the Energy Efficiency Directive, which requires data center operators with a total rated power of 500 kilowatts or above to publicly report their energy performance data annually. An integral aspect of sustainability involves addressing 'Scope 3 emissions' under the Greenhouse Gas Protocol. While there's a significant focus on Scope 1 and 2 responsibilities, which measure emissions from a data center's own operations and electricity usage, Scope 3 emissions encompass the broader environmental impact of indirect emissions generated by data center operations outside its premises. ... The rise of AI technologies presents challenges and opportunities for the data center industry. By incorporating track busway solutions into data center infrastructure, operators can address the challenges posed by escalating power densities while helping to significantly reduce Scope 3 emissions. This will position data centers to meet the demands of AI-driven workloads while ensuring their facilities' continued reliability and energy efficiency.


Large language models generate biased content, warn researchers

Dr. Maria Perez Ortiz, an author of the report from UCL Computer Science and a member of the UNESCO Chair in AI at UCL team, said, "Our research exposes the deeply ingrained gender biases within large language models and calls for an ethical overhaul in AI development. As a woman in tech, I advocate for AI systems that reflect the rich tapestry of human diversity, ensuring they uplift rather than undermine gender equality." The UNESCO Chair in AI at UCL team will work with UNESCO to raise awareness of this problem and contribute to developing solutions by running joint workshops and events with relevant stakeholders: AI scientists and developers, tech organizations, and policymakers. Professor John Shawe-Taylor, lead author of the report from UCL Computer Science and UNESCO Chair in AI at UCL, said, "Overseeing this research as the UNESCO Chair in AI, it's clear that addressing AI-induced gender biases requires a concerted, global effort. This study not only sheds light on existing inequalities but also paves the way for international collaboration in creating AI technologies that honor human rights and gender equity. ..."


Enterprise of the Future: Disruptive Technology = Infinite Possibilities

In the current state, employees must navigate multiple enterprise application interfaces and browse many systems to get information for their day-to-day activities. It could be a simple task such as clarifying a leave policy, checking a leave balance or reporting an incident to have a laptop or printer issue fixed. To accomplish their daily responsibilities, employees often end up hunting for standard operating procedures and other relevant data and knowledge across those applications. This results in a steep learning curve: employees must remember where specific data and information reside, understand the functionality of multiple enterprise applications and master the way each operates. This fragmentation of information across different data sources, and the labyrinth of applications that must be navigated to perform daily activities, leads to inefficiencies and confusion that ultimately impact productivity. Moreover, changes within an organisation necessitate training on new systems and new ways of working, requiring employees to learn and adapt continuously.


The state of open source in Europe

Europe is renowned for regulation, and the past year has produced several large policy frameworks that influence tech — the Cyber Resilience Act (CRA), the Product Liability Directive, and the EU AI Act. With lots of information to digest and react to, both FOSDEM and SOOCON held deep-dive sessions. Over the past year, the CRA has been of the greatest concern to open-source communities, as it puts responsibility for harm caused by software into the hands of its creators. For open-source software this is complicated: who is really responsible? The creator of the open-source software or its implementor? Many open-source projects have no legal entity that anyone can hold "responsible" for problems or harm. ... "'Open-source software and hardware' used to be enough to encompass the community and its aims and concerns. Now it's 'Open-source software, hardware, and data'." An open data movement, which aims to keep public data as freely accessible as possible, has existed for some time. The EU alone has nearly 2 million data sets. However, this past year saw the open-source community have to care about the openness of data in completely new ways.


Elemental Surprise: Physicists Discover a New Quantum State

“The search and discovery of novel topological properties of matter have emerged as one of the most sought-after treasures in modern physics, both from a fundamental physics point of view and for finding potential applications in next-generation quantum science and engineering,” said Hasan. “The discovery of this new topological state made in an elemental solid was enabled by multiple innovative experimental advances and instrumentations in our lab at Princeton.” An elemental solid serves as an invaluable experimental platform for testing various concepts of topology. Up until now, bismuth has been the only element that hosts a rich tapestry of topology, leading to two decades of intensive research activities. This is partly attributed to the material’s cleanliness and the ease of synthesis. However, the current discovery of even richer topological phenomena in arsenic will potentially pave the way for new and sustained research directions. “For the first time, we demonstrate that, akin to different correlated phenomena, distinct topological orders can also interact and give rise to new and intriguing quantum phenomena,” Hasan said.


The cloud is benefiting IT, but not business

According to a recent McKinsey survey of about 50 European cloud leaders, the benefits of cloud migration have yet to be fully realized. In other words, cloud migrations are not as universally beneficial as we've been led to believe. I'm not sure why this is news to anyone. The central promise of cloud computing was to usher in a new era of agility, cost savings, and innovation for businesses. However, according to the McKinsey survey, only one-third of European companies actively monitor non-IT outcomes after migrating to the cloud, which suggests a less optimistic picture. Moreover, 71% of companies measured the impact of cloud adoption solely through the prism of IT operational improvements rather than core business benefits. This imbalance raises a critical question: Are the primary beneficiaries of cloud migration just the tech departments rather than the broader businesses they're supposed to empower? Cloud computing is often associated with business agility and new revenue generation, yet just 37% of companies report cost savings outside of IT, and only 32% report new revenue generation, despite having invested hundreds of millions of dollars in cloud computing.


Securing Mobile Apps: Development Strategies and Testing Methods

Ensuring secure data storage is crucial in today's technology landscape, especially for apps. It's vital to protect sensitive information and financial records to prevent unauthorized access and data breaches. Secure data storage means encrypting information both at rest and in transit, using strong encryption and secure storage techniques. Moreover, setting up access controls and authentication procedures and conducting regular security checks are essential to uphold the confidentiality and integrity of stored data. By prioritizing these data storage practices and security protocols, developers can ensure that user information remains shielded from risks and vulnerabilities. Faulty encryption and flawed security measures can introduce vulnerabilities into apps, putting sensitive data at risk of unauthorized access and misuse. If encryption algorithms are weak or implemented incorrectly, encrypted data can be easily decoded by malicious actors. Poor key management, such as storing encryption keys insecurely, worsens these threats. Additionally, security protocols lacking proper authentication or authorization controls create opportunities for attackers to bypass security measures.
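As an illustration of the at-rest half of that advice, here is a minimal sketch using the Fernet recipe from the Python cryptography package; the package choice and record format are assumptions for the example, and a mobile app would more typically lean on the platform keystore. Fernet provides authenticated symmetric encryption, so tampered ciphertext fails to decrypt rather than silently yielding garbage:

    # Illustrative only: encrypting a record at rest with Fernet
    # (AES-based authenticated encryption from the cryptography package).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # in practice, keep this in a key store,
                                 # never alongside the data it protects
    fernet = Fernet(key)

    token = fernet.encrypt(b"account=12345678;balance=4200.00")
    # token is now safe to persist to disk or a database

    plaintext = fernet.decrypt(token)  # raises InvalidToken if tampered with

Note how the sketch also embodies the article's key-management warning: the weakest link is usually where the key lives, not the cipher itself.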


How Do Open Source Licenses Work? The Ultimate Guide

The choice of license should be made with qualified legal counsel, since it enters the realm of copyright law and must align with what the project creators want to achieve. Meanwhile, among popular licenses, the MIT License is comparatively permissive. It allows users to fork or copy the code freely, offering flexibility in how they utilize it. This stands in contrast to so-called "copyleft" licenses, like the GNU General Public License, which impose more stipulations. ... The process of changing a license can be complicated and challenging, underscoring the importance of selecting the right license initially. When altering an open source project's license, the new license must often comply with or be compatible with the original license, depending on its terms. Ensuring that the changes respect the copyright holders' stakes in the project is crucial. This intricate process requires the guidance of competent legal counsel. It's advisable to establish proper licensing from the project's outset to avoid complexities later on.


6 Things That Will Tell You If You Are Destined for Leadership

Self-awareness about personality traits naturally leads to increased understanding and empathy when working with people who possess different traits than we do. Instead of being unable to make sense of others' actions, we can analyze them and relate them to their inherent personality preferences. This ability can keep our natural wariness of the unfamiliar from turning into judging, blaming or taking things personally. ... Knowing about personality preferences is essential, but it's also crucial to be aware of how people prefer to handle change. Change is an event, but how change affects people depends on their personality traits. Some people embrace change so much that they want it to be fast and vast; however, they tend to have difficulty staying focused and completing tasks. Others have an adverse reaction to change. They honor tradition and enjoy predictability. ... Everyone has experienced negative situations, such as making the wrong choice, reacting to someone's words or behavior with negative emotions or getting sucked into self-sabotaging thoughts. When you can bounce back from these situations with a positive mindset and remain productive, you are exercising resilience.



Quote for the day:

"With desperation comes frustration. With determination comes purpose achievement, and peace." -- James A. Murphy