
Daily Tech Digest - November 12, 2025


Quote for the day:

"Always remember, your focus determines your reality." -- George Lucas



Agentic AI and Solution Architects

Agentic AI tools are intelligent systems designed to operate with autonomy, agency, and authority—three foundational concepts that define their ability to act independently, pursue goals on behalf of users, and make impactful decisions within defined boundaries. These systems are often built using a multi-agent architecture, where multiple specialized or generalist agents collaborate, either in centralized or decentralized environments. ... As (IT) architects, we drive change that creates business opportunities through technical innovation. One of the key activities of a Solution Architect is to design solutions by applying methods and techniques combined with technical and business expertise. The actual solution design process will follow a similar pattern to that of a creative technology design process. An architect will combine and group the different components according to stakeholder group and will, over several sessions, develop concept views related to key architectural components, establishing different options. Deciding the “right” option will mean balancing various criteria such as functionality, value for money, compliance, quality, and sustainability. IT architecture design involves complex decision-making, planning, and problem-solving that require human expertise and experience. That is where most of the architect’s work is focused – using knowledge and experience to research a particular subject, to apply design thinking and to solve problems to establish a solution.


Shadow AI risk: Navigating the growing threat of ungoverned AI adoption

Only half (52%) of global organizations claim to have comprehensive controls in place, with smaller companies lagging even further behind. This lack of robust governance and visibility leaves organizations vulnerable to data breaches, compliance failures, and security risks. ... As AI systems become more autonomous and capable of acting on behalf of users, the risks grow even more complex. The rise of agentic AI, which can make decisions and take independent action within systems, amplifies the impact of weak identity security controls. As these advanced AI systems are given more control over critical systems and data, the potential risk of security breaches and compliance failures grows exponentially. To keep pace, security teams must evolve their identity security strategies to include these emerging machine entities, treating them with the same rigor as human identities. ... To effectively mitigate the risks associated with shadow AI and ungoverned AI adoption, organizations need to start with a solid foundation of governance and visibility. That means implementing clear acceptable use guidelines, access controls, activity logging and auditing, and identity governance for AI entities. By treating AI entities as identities that are subject to the same authentication, authorization, and monitoring as human users, organizations can safely harness the benefits of AI without compromising security.


Secure Product Development Framework: More Than Just Compliance

Security risk assessment is a key SPDF activity that starts early in development and continues throughout the product life cycle through on-market support and eventual product retirement. FDA guidance references AAMI SW96, “Standard for medical device security - Security risk management for device manufacturers,” as a recommended standard for a security risk assessment process. Security risk assessment considers both safety and business security risks ... Implementing a clear and consistent security risk assessment process within the SPDF can also save time (and money). Focus can be placed on those areas of the design with the highest security risk, instead of on design areas with little to no security risk. Decisions on whether patches need to be applied in the field are easier to make when based on security risk. Leveraging the same security risk process across products and business areas allows teams to focus on execution rather than designing a new process. Once a product is launched, an SPDF can assist with managing that product. Postmarket SPDF activities include vulnerability monitoring/disclosure, patch management, and incident response. A critical component of vulnerability monitoring is the maintenance and continuous use of a software bill of materials (SBOM). The SBOM provides a machine-readable inventory of all custom, commercial, open-source, and third-party software components within the device. 
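A minimal sketch of how a machine-readable SBOM supports vulnerability monitoring: load the inventory, match components against an advisory feed, and flag hits. The CycloneDX-style component names, versions, and advisory data below are invented for illustration; a real pipeline would match on package URLs or CPE identifiers against feeds such as the NVD rather than bare name/version pairs.

```python
import json

# Hypothetical CycloneDX-style SBOM fragment; components are illustrative,
# not taken from any real device.
SBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "openssl", "version": "1.1.1k"},
    {"type": "library", "name": "zlib", "version": "1.2.11"},
    {"type": "library", "name": "busybox", "version": "1.33.0"}
  ]
}
"""

# Hypothetical advisory feed: component name -> set of vulnerable versions.
ADVISORIES = {
    "openssl": {"1.1.1k", "1.1.1l"},
    "busybox": {"1.33.0"},
}

def flag_vulnerable(sbom_text, advisories):
    """Return (name, version) pairs in the SBOM that match an advisory."""
    sbom = json.loads(sbom_text)
    hits = []
    for comp in sbom.get("components", []):
        if comp["version"] in advisories.get(comp["name"], set()):
            hits.append((comp["name"], comp["version"]))
    return hits

print(flag_vulnerable(SBOM_JSON, ADVISORIES))
```

Because the SBOM is maintained continuously, the same scan can be re-run every time a new advisory lands, which is what makes postmarket vulnerability monitoring tractable at fleet scale.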


Vibe Coding Can Create Unseen Vulnerabilities

Vibe coding does accelerate app prototyping and makes software collaboration easier, but it also has several shortcomings. Security is a serious concern. Large language models (LLMs) are inherently vulnerable to security risks when used by those without sufficient security experience. Moreover, the risk is amplified by the fact that AI is so flexible that it’s impossible to give out simple, universal rules on how to make AI write secure code for you. LLMs may use outdated libraries, lack input validation, or fail to follow secure practices. AI code generators also lack an understanding of trust boundaries and system architectures. When using vibe coding, programmer oversight and review are necessary to prevent these issues from entering production code. Working with black-box code also makes it difficult to provide context about the app. For example, improper configurations may expose internal logic by sending sensitive code snippets to external APIs. This can be a real problem in highly regulated industries with strict rules about code handling. Vibe coding also tends to add technical debt, accumulating unreviewed or unexplained blocks of code. Over time, these code blocks proliferate, creating a glut and making code maintenance more difficult. Since less experienced developers tend to use vibe coding, they can overlook security issues. Consider the recent Tea Dating Advice hack. A hacker was able to access 72,000 images stored in a public ...
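One concrete instance of the input-validation gap described above: AI assistants frequently emit string-interpolated SQL. A minimal sketch, using Python's built-in sqlite3 module and a throwaway in-memory table, contrasts that pattern with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # The pattern AI assistants often emit: string interpolation, injectable.
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # every row comes back: injection succeeded
print(find_user_safe(payload))    # no rows: payload treated as a literal name
```

Both functions look equally plausible in a generated diff, which is exactly why the article's point about human review stands: the difference is invisible until someone who knows the trust boundary reads the query.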


The state of cloud-native computing in 2025

“We’ve reached a level of maturity in the cloud-native ecosystem that people might think that things are now a bit boring. While AI is a natural extension of Kubernetes and cloud-native architectures, there are changes required in the architecture to support AI workloads compared to previous workloads. Platform engineering continues to have strong customer interest… and new AI enhancements allow for even greater productivity for developers and operators. ...” said Miniman ... “However, runaway complexity and cost threaten to derail mass enterprise success. The modern observability stack has become a black hole, delivering insufficient value for its exorbitant cost, and demands a fundamental rethink of data management. Simultaneously, the data lakehouse gamble failed, proving too complex and expensive. The imperative is clear: democratized data management is needed to pull workloads back onto central platforms,” said Zilka. ... “The focus has shifted from how quickly I can deploy, to how I can get a handle on costs and how resilient my platform is to changes or outages like we saw recently with AWS. Teams are recognising the overhead these technologies have introduced for developers and are centralising that work. We’re seeing more platform teams set best practices, use tooling to enforce them and move from ‘adoption mode’ to ‘operational excellence’,” said Rajabi.


Insurability now a core test for boardroom AI & climate strategy

Organisations face growing threats from data poisoning and cyber-attacks, prompting insurers to play a more decisive role in risk management. Levent Ergin, Chief Climate, Sustainability & AI Strategist at Informatica, highlighted the increasing scrutiny on what businesses can insure against. ... AI is now a fixture at board meetings due to its direct impact on company valuation. However, he observes a gap between current boardroom discussions and the transformative potential of AI. "AI is now a standing item in every board meeting because it directly shapes valuation. Investors see it as a signal of how forward-thinking a company really is. But many boards are still asking the wrong question: 'How can we use AI to automate or augment our existing processes?' when they should be asking 'What's possible?' It's not just about automating what already exists; it's about reimagining how things are done. ..." said Hanson. ... "Too many businesses still treat AI projects like any other investment, where the return has to be quantified against a specific outcome. In truth, they should be budgeting for failure. The best innovators plan for things not to work first time, just as pharmaceutical companies or tech giants do, because even a 98% failure rate can still produce world-changing results. The moment we stop fearing failure and start funding it, we'll see genuine AI innovation break through," said Hanson.


Are we in a cyber awareness crisis?

To improve cyber awareness, organizations need to move beyond box-ticking exercises and build engagement through relevance and creativity. This is the advice of Simon Backwell, a member of the Emerging Trends Working Group at professional association ISACA, and head of information security at software company Benefex. He advocates for interactive, rather than static, training, where employees can explore why something was suspicious, as they learn by doing, rather than guessing the right answer and moving on. ... Not only does AI present new risks from its use within the business, but also from the way criminals are using it. “Email phishing attacks frequently use gen AI chatbots, and vishing attacks, such as robocall scams, now use deepfakes,” notes Candrick. “AI puts social engineering on steroids, yet cybersecurity leaders are still using the same awareness measures that were already insufficient.” While regulatory pressure will play a role in improving AI-related cybersecurity, regulations will always struggle to keep pace, especially in the UK where the process takes time. For example, the EU’s AI Act and Data Act are only now filtering through, much like GDPR did back in 2018, says Backwell. But with how fast AI is advancing – almost weekly – these rules risk becoming outdated as soon as they’re released. ... “As board alignment weakens, CISOs have to work harder to translate cyber risk into business impact, because boards now rank business valuation as their top post-incident concern,” says Cooke.


How to build a supercomputer

When it comes to Hunter’s architecture, Utz-Uwe Haus, head of the HPC/AI EMEA research lab at HPE, describes the Cray EX design as “the architecture that HPE, with its great heritage, builds for the top systems.” A single cabinet in an EX4000 system can hold up to 64 compute blades – high-density modular servers that share power, cooling, and network resources – within eight compute chassis, all of which are cooled by direct-attached liquid-cooled cold plates supported by a cooling distribution unit (CDU). “It’s super integrated,” he says. “The back part, which is the whole network infrastructure (HPE Slingshot), matches the front part, which contains the blades.” For Hunter, HLRS has selected AMD hardware, but Haus explains that with Cray EX systems, customers can, more or less, select their processing unit of choice from whichever vendor they want, and the compute infrastructure can be slotted into the system without the need for total reconfiguration. “Should HLRS decide at some point to swap [Hunter’s] AMD plates for the next generation, or use another competitor’s, the rest of the system stays the same. They could have also decided not to use our network – keep the plates and put a different network in, if we have that in the form factor. [HPE Cray EX architecture] is really tightly matched, but at the same time, it’s flexible,” he says. Hunter itself is intended as a transitional system to the Herder exascale supercomputer, which is due to go online in 2027.


The AI Reskilling Imperative: Bridging India's talent and gender gap

Policy should shift from broad, general measures to specific interventions. Initiatives such as Digital India and Skill India need to be bolstered with AI-specific courses available online in local languages. The government can: sponsor and encourage scholarships and mentorships for women in AI; develop financial reward systems for companies reaching gender diversity in their AI teams; introduce AI literacy and ethics into the national education system, beginning at the secondary school level. ... As the main consumer of AI talent, the private sector should be at the forefront. The first step is a skills-first approach to hiring, while treating reskilling as an ongoing investment is no longer optional. Companies should: devote a substantial proportion of CSR budgets to basic AI and digital literacy efforts, especially among women in low-income and rural communities; launch internal reskilling programs to shift existing workers out of positions at risk of automation (e.g., manual software testing, simple data entry) and into new roles, such as AI integrators or data annotators; embrace explicit ethical standards for the application of AI, including a workforce transition and support strategy. ... Universities will be obliged to redesign courses to incorporate AI's technical foundations and infuse them with ethics, critical thinking, and subject knowledge. Collaboration between industry and academia is important to ensure courses are practical and incorporate real-world projects.


Enterprises to focus AI spend on cost savings & data control

"CIOs will move from experimenting with AI to orchestrating it, governing outcomes, agents, and data. AI leadership will evolve from pilots to performance. CIOs will be accountable for tangible business outcomes, defining clear frameworks that connect AI investments to enterprise KPIs and ROI. That means managing a new hybrid workforce of humans and digital agents, complete with job descriptions, correlated KPIs and measurement standards, and governance guardrails. Yet none of this will succeed without secure information management, ensuring that the data fueling and training these agents is accurate, compliant, and trustworthy. Simply put, good data results in good AI outcomes. As AI accelerates, traditional network and security operations will be reimagined for an always-on, agent-driven enterprise, where value is derived as much from data discipline as from innovation itself," said Bell. ... "A Major brand fallout will force AI accountability. In the next year, we'll likely see a major brand face real damage from AI misuse. It won't be a cyberattack in the traditional sense but something more subtle, like a plain text prompt injection that manipulates a model into acting against intent. These attacks can force hallucinations, expose proprietary or sensitive information, or break customer trust in seconds. Enterprises will need to verify AI behavior the same way they secure their networks, by checking every input and output. The companies that build AI systems with accountability and transparency at the core will be those that keep their reputations intact," said Berry.

Daily Tech Digest - October 23, 2025


Quote for the day:

“The more you lose yourself in something bigger than yourself, the more energy you will have.” -- Norman Vincent Peale



Leadership lessons from NetForm founder Karen Stephenson

Co-creation is a hot buzzword encouraging individuals to integrate and create with each other, but the simplest way to integrate and create is in the mind of one person — if they’re willing to push forward and do it. Even further, what can an integrated team of diverse minds accomplish when they co-create? ... In the age of AI, humans will need to focus on what humans do well. At the moment, at least, that’s making novel connections, thinking by analogy and creating the new. Our single-field approach to learning, qualifications and career ladders makes it hard for us to compete with machines that are often smarter than we are in any given discipline. For that creative spark and to excel at what messy, forgetful, slow, imperfect humans do best, we need to work, think and live differently. In fact, the founders of five of the largest companies in the world are (or were) polymaths — mentally diverse people skilled in multiple disciplines — Bill Gates, Steve Jobs, Warren Buffett, Larry Page and Jeff Bezos. They learn because they’re curious and want to solve problems, not for a career ladder. It’s easier than ever, today, to learn with AI and online materials and to collaborate with tech and humans around the world. All you need to do is open inward to your talents and desires, explore, collect and fuse.


Why cloud and AI projects take longer and how to fix the holdups

In the case of the cloud, the problem is that senior management thinks that the cloud is always cheaper, that you can always cut costs by moving to the cloud. This is despite the recent stories on “repatriation,” or moving cloud applications back into the data center. In the case of cloud projects, most enterprise IT organizations now understand how to assess a cloud project for cost/benefit, so most of the cases where impossible cost savings are promised are caught in the planning phase. For AI, both senior management and line department management have high expectations with respect to the technology, and in the latter case may also have some experience with AI in the form of as-a-service generative AI models available online. About a quarter of these proposals quickly run afoul of governance policies because of problems with data security, and half of this group dies at this point. For the remaining proposals, there is a whole set of problems that emerge. Most enterprises admit that they really don’t understand what AI can do, which obviously makes it hard to frame a realistic AI project. The biggest gap identified is between an AI business goal and a specific path leading to it. One CIO calls the projects offered by user organizations “invitations to AI fishing trips” because the goal is usually set in business terms, and these would actually require a project simply to identify how the stated goal could be achieved.


Who pays when a multi-billion-dollar data center goes down?

While the Lockton team is looking at everything from immersion cooling to drought, there are a handful of risks where it feels the industry isn't adequately preparing. “The big thing that isn't getting on people's radars in a growing way is customer equipment,” Hayhow says. “Looking at this through the lens of the data center owner or developer, it's often very difficult. “It's a bit of an unspoken conversation that the equipment in the white space belongs to the customer. Often you don't have custody over it, you don't have visibility over it, and it’s highly proprietary. But the value of it is growing.” Per square meter of white space, the Lockton partner suggests that the value of the equipment five years from now will be exponentially larger than the value of the equipment five years ago, as more data centers invest in expensive GPUs and other equipment for AI use cases. “Leases have become clearer in terms of placing responsibility for damage to customer equipment more squarely on the shoulders of the owner, developer,” Hayhow says. “We're having that conversation in the US, where the halls are larger, the value of the equipment is greater, and some of the hyperscale customers are being much more prescriptive in terms of wanting to address the topic of damage to our equipment … if you lose 20 megawatts worth of racks of Nvidia chips, the lead time to get those replaced, unless you're building elsewhere, is quite significant.”


AI Agents Need Security Training – Just Like Your Employees

“It may not be as candid as what humans would do during those sessions, but AI agents used by your workforce do need to be trained. They need to understand what your company policies are, including what is acceptable behavior, what data they're allowed to access, what actions they're allowed to take,” Maneval explained. ... “Most AI tools are just trained to do the same thing over and over and so it means decisions are based on assumptions from limited information,” she explained to Infosecurity. “Additionally, most AI tools solve real problems but also create real risks, and each solves different problems and creates different risks.” While some cybersecurity experts argue that auditing AI tools is no different to auditing any other software or application, Maneval disagrees. ... Maneval said her “rule of thumb” is that whether you’re dealing with traditional machine learning algorithms, generative AI applications, or AI agents, “treat them like any other employees.” This not only means that AI-powered agents should be trained on security policies but should also be forced to respect security controls that the staff have to respect, such as role-based access controls (RBAC). “You should look at how you treat your humans and apply those same controls to the AI. You probably do a background check before anyone is hired. Do the same thing with your AI agent. ..."
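Treating an agent "like any other employee" can be as literal as registering the agent's identity in the same role-permission registry used for humans, so the same RBAC check runs regardless of who, or what, requests an action. A minimal sketch, with invented role, permission, and identity names:

```python
# Roles map to the permissions they grant; names are hypothetical.
ROLE_PERMISSIONS = {
    "support_agent": {"read_tickets", "draft_reply"},
    "finance_analyst": {"read_ledger", "export_report"},
}

# Humans and AI agents live in one identity registry, each with one role.
IDENTITIES = {
    "alice@example.com": "finance_analyst",    # human
    "agent:ticket-triage-01": "support_agent", # AI agent, same registry
}

def authorize(identity, action):
    """Same check for humans and agents; unknown identities are denied."""
    role = IDENTITIES.get(identity)
    if role is None:
        return False  # no "background check" on record: deny by default
    return action in ROLE_PERMISSIONS.get(role, set())

print(authorize("agent:ticket-triage-01", "draft_reply"))    # within role
print(authorize("agent:ticket-triage-01", "export_report"))  # outside role
print(authorize("agent:unregistered", "read_tickets"))       # unregistered
```

The deny-by-default branch is the code-level analogue of the "background check": an agent that was never vetted and registered simply has no role, so every action it attempts is refused.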


Why must CISOs slay a cyber dragon to earn business respect?

Why should a security leader need to experience a major cyber incident to earn business colleagues’ respect? Jeff Pollard, VP and principal analyst at Forrester, says this enterprise perception problem is “just part of human nature. If we don’t see the bad thing happening, we don’t appreciate all of the things that were done to prevent that bad thing from happening.” Of course, if an attack turns into an incident and defense goes poorly, “it can easily turn from a hero moment to a scapegoat moment,” Pollard says. Oberlaender, who now works as a cybersecurity consultant, is among those who believe hard-earned experience should be rewarded, but that’s not what he’s seeing in the market today. ... CISOs “feel that they need to fight off an attack to show value, but there are many other successes they can do and show,” says Erik Avakian, technical counselor at Info-Tech Research Group. “Building KPIs is a powerful way to show their value.” ... Chris Jackson, a senior cybersecurity specialist with tech education vendor Pluralsight, reinforces the frustration that many enterprise CISOs feel about the lack of appropriate respect from their colleagues and bosses. “CISOs are a lot like pro sports coaches. It doesn’t matter how well they performed during the season or how many games they won. If they don’t win the championship, it’s seen as a failure, and the coach is often the first to go,” Jackson says. 


The next cyber crisis may start in someone else’s supply chain

Organizations have improved oversight of their direct partners, but few can see beyond the first layer. This limited view leaves blind spots that attackers can exploit, particularly through third-party software or service providers. “We’re in a new generation of risk, one where cyber, geopolitical, technology, political risk, and other factors are converging and reshaping the landscape. The impact on markets and operations is unfolding faster than many organizations can keep up,” said Jim Wetekamp, CEO of Riskonnect. ... Third-party and nth-party risks continue to expose companies to disruption. Most organizations have business continuity plans for supplier disruptions, but their monitoring often stops at direct partners. Only a small fraction can monitor risks across multiple tiers of their supply chain, and some cannot track their critical technology providers at all. Organizations still underestimate how dependent they are on third parties and continue to rely on paper-based continuity plans that offer a false sense of security. ... More companies now have a chief risk officer, but funding for technology and tools has barely moved. Most risk leaders say their budgets have stayed the same even as they are asked to cover more ground. Many are turning to automation and specialized software to do more with what they already have.


Boardroom to War Room: Translating AI-Driven Cyber Risk into Action

Great CISOs today combine strategic leadership, financial knowledge, technological skills, and empathy to turn cybersecurity from a burden on operations into a strong enabler. This change happens faster with artificial intelligence. AI has a lot of potential, but it also makes things more uncertain. It can do things like forecast threats and automate orchestration. CISOs need to see AI problems as more than just technological problems; they need to see them as business risks that need clear communication, openness, and quick response. ... Data and graphics, not storytelling, win over executives. Suggested metrics include: Predictive accuracy - The percentage of risks that AI flagged before a breach compared to the percentage of threats that AI flagged after it happened; Speed of reaction - The average time it took for AI-enabled containment to work compared to manual reaction; False positive rate - Tech teams employed AI to improve alerts and cut down on alert fatigue from X to Y; Third-party model risk - The number of outside model calls that were looked at and accepted; Visual callout suggestion - A mock-up of a dashboard that illustrates AI risk KPIs, a trendline of predictive value, and a drop in incidents. ... Change from being an IT responder who reacts to problems to a strategic AI-enabled risk leader. Take ownership of your AI risk story, keep an eye on third-party models, provide your board clear information, and make sure your war room functions quickly.
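The first two KPIs suggested above reduce to simple ratios over counts a security team already tracks. A hedged sketch with invented numbers, purely to show how the metrics are computed:

```python
def predictive_accuracy(flagged_before, flagged_after):
    """Share of AI-flagged risks caught before a breach rather than after."""
    total = flagged_before + flagged_after
    return flagged_before / total if total else 0.0

def false_positive_rate(false_alerts, benign_events):
    """Fraction of benign events the AI incorrectly alerted on."""
    return false_alerts / benign_events if benign_events else 0.0

# Illustrative counts: 45 risks flagged pre-breach vs 5 post-breach,
# and 120 false alerts out of 4,000 benign events examined.
print(predictive_accuracy(45, 5))      # 0.9
print(false_positive_rate(120, 4000))  # 0.03
```

Reporting both numbers per quarter gives the board the trendline the article recommends, with no narrative required.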


Govt. faces questions about why US AWS outage disrupted UK tax office and banking firms

“The narrative of bigger is better and biggest is best has been shown for the lie it always has been,” Owen Sayers, an independent security architect and data protection specialist with a long history of working in the public sector, told Computer Weekly. “The proponents of hyperscale cloud will always say they have the best engineers, the most staff and the greatest pool of resources, but bigger is not always better – and certainly not when countries rely on those commodity global services for their own national security, safety and operations. “Nationally important services must be recognised as best delivered under national control, and as a minimum, the government should be knocking on AWS’s door today and asking if they can in fact deliver a service that guarantees UK uptime,” he said. “Because the evidence from this week’s outage suggests that they cannot.” ... “In light of today’s major outage at Amazon Web Services … why has HM Treasury not designated Amazon Web Services or any other major technology firm as a CTP for the purposes of the Critical Third Parties Regime,” asked Hillier, in the letter. “[And] how soon can we expect firms to be brought into this regime?” Hillier also asked HM Treasury for clarification about whether or not it is concerned about the fact that “seemingly key parts of our IT infrastructure are hosted abroad” given the outage originated from a US-based AWS datacentre region but impacted the activities of Lloyds Bank and also HMRC.


Quantum work, federated learning and privacy: Emerging frontiers in blockchain research

It is possible to have a future in which the field of quantum computation could serve as the foundation for blockchain consensus. The prospect is alluring: quantum algorithms can solve problems that classical computers find difficult, and the approach may be more efficient and resistant to brute-force attacks. The danger, however, is significant: when quantum computers are sufficiently robust, existing encryption standards can be compromised. ... Federated learning is another upcoming element of blockchain studies, a machine learning model training technique that avoids data centralisation. Federated learning enables various devices or nodes to contribute to a shared model instead of storing sensitive data on a central server, keeping raw data inaccessible to third parties. ... The issue of privacy is of specific importance today due to the increased regulatory pressure on exchanges and cryptocurrency companies. A compromise between user privacy and regulatory openness could prove to be the key to success. Studies of privacy-preserving instruments provide a competitive advantage to blockchain developers and exchanges interested in increasing their influence on the global economy. ... The decade of blockchain research to come will not be characterised by faster transactions or lower costs. It will redraw the borders of trust, calculation, and privacy in digitally based economies.
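The core mechanism of federated learning fits in a few lines: each node takes a gradient step on its own private data, and the coordinator only ever sees and averages the resulting weights, never the data itself. A toy sketch with 1-D linear regression on invented data (real systems use frameworks such as Flower or TensorFlow Federated, plus secure aggregation):

```python
def local_update(weights, data, lr=0.1):
    """One gradient step of 1-D linear regression (y ~ w*x) on local data."""
    w = weights
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return w - lr * grad

def federated_average(node_weights):
    """Coordinator step: plain average of the nodes' submitted weights."""
    return sum(node_weights) / len(node_weights)

# Two nodes holding private datasets, both drawn from y = 3x.
node_data = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]]
w = 0.0
for _ in range(50):
    # Each node trains locally; only the weights cross the network.
    w = federated_average([local_update(w, d) for d in node_data])
print(round(w, 2))  # converges to 3.0
```

Everything the coordinator receives is a single float per node per round, which is the privacy property the excerpt is pointing at: the model improves without the sensitive records ever leaving the node.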


Ransomware groups surge as automation cuts attack time to 18 mins

The ransomware group LockBit has recently introduced "LockBit 5.0", reportedly incorporating artificial intelligence for attack randomisation and enhanced targeting options, with a focus on regaining its previous position atop the ransomware ecosystem. Medusa, by contrast, was noted to have fallen behind due in part to lacking widespread automated and customisable features, despite previous activity levels. ReliaQuest's analysis predicts the rise of new groups through the lens of its three-factor model, specifically naming "The Gentlemen" and "DragonForce" as likely to become major threats due to their adoption of advanced technical capabilities. The Gentlemen, for instance, has listed over 30 victims on its data-leak site within its first month of activity, underpinned by automation, prioritised encryption, and endpoint discovery for rapid lateral movement. Conversely, groups such as "Chaos" and "Nova" are likely to remain minor players, lacking the integral features associated with higher victim counts and affiliate recruitment. ... RaaS groups now use automation to reduce breakout times to as little as 18 minutes, making manual intervention too slow. Implement automated containment and response plays to keep pace with attackers. These workflows should automatically isolate hosts, block malicious files, and disable compromised accounts quickly after a critical detection, containing the threat before ransomware can be deployed.
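With breakout times down to minutes, the recommended containment play (isolate the host, block the file hash, disable the account after a critical detection) has to run without a human in the loop. A minimal sketch; the isolation, blocking, and account calls are stand-ins for real EDR and IAM platform APIs:

```python
# Action log stands in for calls to real EDR/IAM APIs.
actions_taken = []

def isolate_host(host):
    actions_taken.append(("isolate", host))

def block_hash(sha256):
    actions_taken.append(("block", sha256))

def disable_account(user):
    actions_taken.append(("disable", user))

def contain(detection):
    """Run the full play automatically, but only for critical detections."""
    if detection["severity"] != "critical":
        return False
    isolate_host(detection["host"])
    block_hash(detection["sha256"])
    disable_account(detection["user"])
    return True

# Hypothetical critical detection event.
detection = {
    "severity": "critical",
    "host": "ws-0231",
    "sha256": "deadbeef" * 8,
    "user": "jsmith",
}
print(contain(detection), len(actions_taken))
```

Gating on severity keeps the destructive steps from firing on routine alerts, while letting the three containment actions land in seconds once a critical detection arrives, inside the 18-minute breakout window.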

Daily Tech Digest - April 09, 2025


Quote for the day:

"Don't judge each day by the harvest you reap but by the seeds that you plant." -- Robert Louis Stevenson



How AI and ML Will Change Financial Planning

AI adoption in finance does not come easily. Because finance systems contain vast amounts of sensitive data, they are more susceptible to data breaches. Integrating AI systems with other components, such as cloud services and APIs, can increase the number of entry points that hackers might exploit; hence, most finance executives cite data security as a top challenge. Limited AI skills are another hurdle: most finance organizations lack the skill set to leverage AI in planning and budgeting activities. In the early stages, high costs, staff resistance, lack of transparency, and uncertain ROI dominate. Other hurdles, such as data security and finding consistent data, stay constant. As companies expand their use of AI, the potential for bias and misinformation rises, particularly as finance teams tap GenAI. Integrating AI solutions and tools into existing systems also presents further challenges. As AI and ML continue to evolve, their role in financial planning will only grow. The ability to continuously adapt to new data, automate routine processes, and generate predictive insights positions AI as a critical tool for financial leaders. By embracing these technologies, businesses can transition from reactive financial management to proactive, data-driven decision-making that not only mitigates risks but also identifies new opportunities for growth.


The Augmented Architect: Real-Time Enterprise Architecture In The Age Of AI

No human can know everything about a modern digital enterprise. AI doesn’t pretend to either — but it remembers everything and brings the right detail to the fore at the right time. Think of it as a cognitive prosthetic for the architect: surfacing precedents, warnings, and rationale at the point of decision. ... Visibility isn’t just about having access to data — it’s about trust in its freshness. Real-time integration with operational sources (observability platforms, configuration systems, source control, deployment records) ensures that the architecture graph is never out of date. The haystack becomes a needle-sorter. ... Architecture artifacts multiply: PowerPoints, spreadsheets, PDFs, whiteboards. But in an agentic system, everything is rendered on demand from the same graph (and its associated unstructured content, linked via vector embeddings). Want a heatmap of system risks? A regulatory trace? A roadmap to sunset legacy? One prompt, one view — consistent, explainable, and composable. And those unstructured artifacts? An agent is happy to harvest new insights from them back into the knowledge store. ... Review boards become decision accelerators instead of speed bumps. Agents pre-check submissions. Exceptions, not compliance, become the focus. Draft decisions are generated and validated before the meeting even starts. 


Choosing the Most Secure Cloud Service for Your Workloads

Managed cloud servers offer the security benefit of being relatively simple to configure and operate. Simplicity breeds security because the fewer variables you have to work with, the lower the risk of making a mistake that will lead to a breach. On the other hand, managed cloud servers are subject to a relatively large attack surface. Threat actors could target multiple components, including the operating systems installed on server instances, individual applications, and network-facing services. ... If you deploy containers using a managed service like AWS Fargate or GKE, you get many of the same security advantages as you enjoy when using serverless functions: The only vulnerabilities and misconfigurations you have to worry about are ones that impact your containers. The cloud provider bears responsibility for securing the host infrastructure. This isn't true, however, if you deploy containers on infrastructure that you manage yourself — by, for example, creating a Kubernetes cluster using nodes hosted on EC2. In that case, you end up with a broad and complex environment, making it quite challenging to secure. ... Note, too, that containers tend to be complex. A single container image could include code drawn from many sources. 


The Invisible Data Battle: How AI Became a Cybersec Professional’s Biggest Friend and Foe

With all of these booby traps and stonewalling techniques in mind, cybersec professionals have been working on smart scrapers for years, and they’re finally here. A “smart” or “adaptive” scraper uses natural language processing (NLP) and machine learning to handle dynamic content and intricate website architectures (e.g., nested categories and varied page layouts), bypass IP blocking and rate limiting via rotating proxies, deal with CAPTCHAs, login forms and cookies — and even provide real-time data updates. For instance, adaptive scrapers can identify the structure of a web page by analyzing its document object model (DOM) or by following specific patterns, and this allows for dynamic adaptation. AI models like convolutional neural networks (CNNs) can also detect and interact with visual elements on websites, such as buttons. In fact, smart scrapers can even mimic human browsing patterns with random pauses, mouse movements and realistic navigation sequences that bypass behavioral analysis tools. And that’s not all. AI-powered web scrapers can modify browser configurations to mask telltale signs of automation (such as headless browsers that run without a traditional graphical interface) that anti-bot systems look for. 
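The rotation-and-pause idea can be sketched in a few lines of Python. This is only an illustration of the technique: the proxy addresses, user-agent strings, and pause range below are invented assumptions, not values from any real scraper.

```python
import itertools
import random

# Illustrative pools; real adaptive scrapers draw these from large, rotating sets.
PROXIES = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]
USER_AGENTS = ["Mozilla/5.0 (Windows NT 10.0)", "Mozilla/5.0 (Macintosh)"]

proxy_cycle = itertools.cycle(PROXIES)

def request_plan(urls):
    """Yield (url, proxy, user_agent, pause) tuples that vary per request."""
    for url in urls:
        yield (url, next(proxy_cycle), random.choice(USER_AGENTS),
               random.uniform(1.0, 4.0))  # human-like random pause, in seconds

for url, proxy, ua, pause in request_plan(["https://example.com/a",
                                           "https://example.com/b"]):
    print(url, proxy, ua, round(pause, 1))
    # a real scraper would sleep for `pause` before the next request
```

Cycling through proxies defeats naive per-IP rate limits, while the randomized pauses are what helps a scraper evade the behavioral analysis tools mentioned above.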


The Agile Advantage: doubling down on the biggest business challenges

Agile practices have been gaining popularity, with 51% of respondents indicating their organisations actively use Agile to organize and deliver work. However, the data reveals inconsistencies in how the benefits of Agile are perceived across teams and organisations. ... Regardless of whether teams embrace Agile practices completely, there are opportunities for leaders to bring forward Agile principles to address the unique challenges of modern work. While leaders may feel confident in their teams’ direction, the lack of alignment experienced by entry-level employees can have serious repercussions. Feedback from these employees can serve as a valuable indicator of how effectively an organisation integrates Agile practices – and the data clearly shows there is considerable room for improvement. For organisations of any size, addressing these gaps is imperative. Leaders must adopt consistent tools and frameworks that enhance training, improve communication and foster greater alignment across teams. Proactively tackling these issues early can alleviate future issues like misalignment and burnout, while building a more cohesive and resilient organisation. 


The Strategic Evolution of IT: From Cost Center to Business Catalyst

The most successful organizations recognize that technology-driven transformation requires more than just implementing new solutions — it demands an organization-wide cultural shift. This means evolving IT teams from traditional "order-takers" to influential decision-makers who help shape and execute business strategy. The key lies in creating an environment where innovation thrives and tech professionals feel empowered to contribute their unique perspectives to business discussions. Organizations must invest in both the technical and business acumen of their IT talent. A dual focus on these areas enables teams to better understand the broader business context of their work and contribute more meaningfully to strategic discussions. When IT professionals can speak the languages of both technology and business, they become invaluable partners in driving broader innovation. Success in this area requires a commitment to continuous learning, mentorship programs and creating opportunities for cross-functional collaboration that expose IT teams to diverse business challenges and perspectives. ... With technology continuing to reshape industries and markets, the question is no longer whether tech professionals should have a seat at the strategic table, but how to maximize its potential and impact on business success.


Is HR running your employee security training? Here’s why that’s not always the best idea

“HR departments may not be fully aware of current cyber threats or the organization’s specific risks,” she says. This can result in overly broad or generic training, which reduces its effectiveness. These programs can also fail to emphasize the practical, real-world application of security practices or offer enough guidance on addressing threats if they lack collaboration with security and IT teams. HR may not effectively tailor the training to the organization’s industry-specific threats, Murphy notes. Without the security department’s involvement, training content often lacks focus and fails to address the company’s unique threats, leaving employees unsure of what to watch for. ... However, while HR shouldn’t run employee security training, Willett does view the HR team as a key partner. He suggests a collaborative approach where HR and security teams work together, leveraging their respective strengths. He explains that HR can help translate complex technical information into understandable language, while the security team provides the core content and technical expertise. ... HR has skin in the game for employee onboarding, compliance, and adherence to company policies and practices, according to Hughes. 


Why CISOs are doubling down on cyber crisis simulations

“It was once enough to theorise risk identification through using risk matrixes and lodging them in a spreadsheet describing threats and their likelihood of materialising,” says Aaron Bugal, Field CISO, APJ at Sophos. “However, looking at the impact caused by ransomware and subsequent extortion demands sending executive teams and board members into a spin, highlights the lack of understanding of how pervasive cyber criminals are and the opportunities they take.” To move beyond theoretical planning, Bugal advocates for breach simulations as a practical step forward. “A simulation of a breach will allow you to draw out the concise and well-measured response actions that are demanded by you and your organisation,” he explains. Bringing together a cross-section of executives helps uncover gaps in readiness. “Physically sitting with a cross section of executives, board members, human resources, IT, security, legal and public relations will tease out the procedures, responsibilities and resources needed to respond with efficacy.” By running these exercises in advance, organizations can avoid the chaos of real-time crisis management. “Simulations provide a structured approach to build and refine a breach response while playing it out and discovering where improvements are needed,” Bugal adds, “rather than learning and panicking whilst under the pressure of an active attack.”


Google Cloud Security VP on solving CISO pain points

On the strategic side, Bailey said CISOs are asking for a middle ground between highly integrated platforms and the flexibility of best-of-breed tools. "They want best of breed with the limited toil of what a platform gives," he said. "They're tired of integrations constantly breaking." Bailey also discussed how the role of development-level security – often called DevSecOps – is increasingly being absorbed into security operations. "The CISO is going to have responsibility for all these problems," he said. "Visibility into what's being deployed, compliance reporting, and detection on application code – that's all coming into SecOps." Another emerging front is model protection. Google's Model Armour and AI Protection aim to defend not just infrastructure but also the AI models themselves. "If a bad prompt starts coming through, we can help block that," Bailey said. "We're putting security controls around development environments, models, data and prompts." The Mandiant brand, once synonymous with incident response, has found new life as both a consulting arm and a foundation for content in Google Threat Intelligence. "Mandiant is our consulting practice," Bailey said. "It's also where our elite threat hunters live – a lot of them are ex-Mandiant, and they're integrated with our consulting team to operationalise what they see on the front lines."


Shadow Table Strategy for Seamless Service Extractions and Data Migrations

The shadow table strategy maintains a parallel copy of data in a new location (the "shadow" table or database) that mirrors the original system’s current state. The core idea is to feed data changes to the shadow in real time, so that by the end of the migration, the shadow data store is a complete, up-to-date clone of the original. At that point, you can seamlessly switch to the shadow copy as the primary source. ... Transitioning from a monolithic architecture to a microservices-based system requires more than just rewriting code; you often must carefully migrate data associated with specific services. Extracting a service from a monolith risks inaccuracy if you do not transfer its dependent data accurately and consistently. Here, shadow tables play a crucial role in decoupling and migrating a subset of data without disrupting the existing system. In a typical service extraction, the legacy system continues to handle all live operations while developers build a new microservice to handle a specific functionality. During extraction, engineers mirror the data relevant to the new service into a dedicated shadow database. Whether implemented through triggers or event-based replication, the dual-write mechanism ensures that the system simultaneously records every change made in the legacy system in the shadow database.
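A toy version of the trigger-based dual-write can be demonstrated with SQLite. The table and trigger names below are illustrative, and a real migration would also need a one-time backfill of pre-existing rows before the triggers take over.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders(id INTEGER PRIMARY KEY, status TEXT);
CREATE TABLE orders_shadow(id INTEGER PRIMARY KEY, status TEXT);

-- Dual-write triggers: every change to the legacy table is mirrored in real time.
CREATE TRIGGER mirror_insert AFTER INSERT ON orders
BEGIN
  INSERT INTO orders_shadow(id, status) VALUES (NEW.id, NEW.status);
END;
CREATE TRIGGER mirror_update AFTER UPDATE ON orders
BEGIN
  UPDATE orders_shadow SET status = NEW.status WHERE id = NEW.id;
END;
""")

# The legacy system keeps handling live writes as usual...
con.execute("INSERT INTO orders(status) VALUES ('placed')")
con.execute("UPDATE orders SET status = 'shipped' WHERE id = 1")

# ...while the shadow table stays an up-to-date clone, ready for cutover.
print(con.execute("SELECT status FROM orders_shadow").fetchall())  # [('shipped',)]
```

In production the same effect is usually achieved with database triggers as shown here or, as the article notes, with event-based replication (e.g., change data capture), which avoids adding write latency to the legacy path.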

Daily Tech Digest - March 02, 2021

Looking For An AI Ethicist? Good Luck

Just like with the hunt for data scientists, the person in charge of driving the AI ethics strategy at a company ideally will have a long list of qualifications. According to Ammanath, who was a Datanami Person to Watch for 2020, an AI ethicist generally should have the following skills and capabilities: An understanding of AI tools and technology; An understanding of the business and the industry and the specific AI ethical traps that exist in them; Good communication skills and the ability to work across organizational boundaries; And regulatory, legal, and policy knowledge. There are additional skills that may be required, such as having experience with the philosophical, psychological, or sociological aspects of ethics; knowing how to structure a business and a team in an ethical manner; and even knowing how to mitigate the environmental impact of using AI. “The point is that you need to have a wide variety of skills,” Ammanath says. “It’s like finding that unicorn…Trying to find that person with credible experience and knowledge in all of these areas is practically impossible.” So where does that leave you? The odds are, unless you’re working at a very large enterprise, you won’t be able to find a person to fit this exact job description.


Building a Next-Generation SOC Starts With Holistic Operations

Today's reimagined SOCs bring together disparate teams to counteract intrusions, providing everyone with a coordinated, holistic, real-time view. This tactic empowers analysts to head things off, "shifting left" in the cyber kill chain to identify the full scope of the attack while it's happening and quickly block it as far upstream as possible (ideally using automated investigation and response). We see this as the only way for SOCs to address new threats in time to avert major business impacts. It's time to empower your SOC with multidomain, central teams. It takes more than tools to differentiate a reactive SOC from an agile, proactive, successful one. Modernizing security operations requires an operational model that drives cross-technology integration to match the attacker's modus operandi. Empowering your SOC to deploy speedy, effective countermeasures means dangerous attackers will be slowed or deterred, reducing damage to your business and saving valuable time and money. The proper template for a modernized SOC team operates seamlessly across domains with an end-to-end view. Consider your SOC's opposition: Sophisticated bad actors see the entire picture, know where they're going and who they're engaging, and understand how to exploit weaknesses.


Can we explain AI? An Introduction to Explainable Artificial Intelligence.

Why do we need to explain AI? This is a question that has no simple answer to it. Suppose you take the example of my project that I mentioned initially. In that case, the controller might want to understand our trust models. It is hard to believe something we do not understand. We have a problem when we cannot explain the decisions made by an algorithm. In assessing AI’s decisions, it is crucial to assess the factors that led to that decision. We will therefore be able to audit and challenge decisions or work to improve the factors. This is where the importance of xAI, or explainable AI, comes in, which addresses the need to be able to interpret a model of Machine Learning. This is because it is typical for the formulation of problems addressed by ML to be incomplete. Often, forecasting is not enough to address a problem. It is essential to know more than just “what,” but also “why,” “how.” It is not enough to know that a teacher has been poorly classified in one year; it is also essential to know the reason for improvement. Although AI is one of the most important and disruptive technologies of the century, it is subject to bias. Good model accuracy can be a trap.
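One simple way to get at the "why" behind a decision is occlusion-style attribution: remove each factor in turn and measure how much the model's score drops. The toy scorer, weights, and feature names below are invented for illustration and are not any standard xAI library.

```python
# A made-up linear scorer standing in for a trained model (illustrative only).
def score(features):
    weights = {"attendance": 0.5, "homework": 0.3, "exam": 0.2}
    return sum(weights[k] * v for k, v in features.items())

student = {"attendance": 0.9, "homework": 0.4, "exam": 0.7}
baseline = score(student)

# "Why" each factor matters: zero it out and measure the change in the score.
attribution = {k: baseline - score({**student, k: 0.0}) for k in student}

# The factor with the largest attribution drove the decision the most.
print(max(attribution, key=attribution.get))  # attendance
```

For a genuinely opaque model the same occlusion idea still applies, which is why related perturbation-based methods (such as LIME or SHAP) are common starting points for explainability in practice.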


Why IT Should Have a Separate Training Budget

Large IT organizations can fund their own training departments, complete with their own training directors. Often these individuals have experience in both IT and education -- and they do a great job. But in many other cases, there is no formal IT training function -- only an IT training budget. In these cases, the CIO, project managers and other IT leadership must step in. They identify the core skills that they need and the individuals whom they want to send to these trainings -- and what the training will cost. This strategy of collectively evaluating IT staff, with each manager coming forth with his or her staff training needs, works -- but it’s far from flawless. The major downside is that people who are not skilled in education or training might not make the right training choices -- either in courses or in the people they send. ... Hot projects and keeping systems running are IT priorities, not training. So, if there is a hot project, or a major performance issue with an existing system, training is quickly forgotten. The result is that training that was budgeted gets deferred or isn't used at all. This makes for a very tough fight for the CIO when the next budget review comes around. The CFO will undoubtedly challenge the IT training budget, saying that the budget was underused last year so should be re-funded at that lesser level.


Indian Vaccine Makers, Oxford Lab Reportedly Hacked

The Chinese state-backed hacking group APT10, also known as Stone Panda, has in recent weeks targeted the IT systems of two Indian pharmaceutical makers whose coronavirus vaccines are being used in the country's immunization program, the Reuters news service reports, citing a report from Tokyo, Japan-based cybersecurity firm Cyfirma. That company says that hackers identified gaps and vulnerabilities in the IT infrastructure and supply chain software of the pharmaceutical firm Bharat Biotech and the Serum Institute of India, or SII, one of the largest vaccine makers globally, Reuters reports. Cyfirma says the apparent motivation behind the hackers' efforts was an attempt to exfiltrate intellectual property of the pharmaceutical firms, according to Reuters. SII is making the AstraZeneca vaccine for many countries and will soon start bulk-manufacturing Novavax shots, the news service reports. Cyfirma, SII and Bharat Biotech did not immediately respond to Information Security Media Group's requests for comment. ... Meanwhile, last week, Forbes reported that U.K.-based Oxford University's Division of Structural Biology – known as Strubi – had been hacked, with equipment used to prepare biochemical samples targeted.


Rethinking the artificial intelligence race

The way that AI systems are developed naturally creates doubts about their ability to function in untested environments, namely the requirement of large amounts of data inputs, the necessity that they be nearly perfect, and the effects of the preconceived notions of its creators. First, lack of, or erroneous, data is one of the largest challenges, especially when relying on machine learning techniques. To teach a computer to recognize a bird, it must be fed thousands of pictures to “learn” a bird’s distinguishing features, which naturally limits use in fields with few examples. Additionally, if even a tiny portion of the data is incorrect (as little as 3%), the system may develop incorrect assumptions or suffer drastic decreases in performance. Finally, the system may also recreate assumptions and prejudices—racist, sexist, elitist, or otherwise—from extant data that already contains inherent biases, such as resume archives or police records. These could also be coded in as programmers inadvertently impart their own cognitive biases into the machine learning algorithms they design. This propensity for deep-seated decision-making problems, which may only become evident well after development, will prove problematic to those that want to rely heavily on AI, especially concerning issues of national security.


How Leaders Can Help Their Teams Manage Stress in the New Year

Employees need to take vacations to reset and get their minds off of their work, but modern work policies don’t encourage time off the way they should. Plenty of companies offer generous or even unlimited amounts of vacation time, but workers are reticent to indulge lest they fall behind. The easiest solution to this issue is to simply mandate that workers take the time off they need. To combat the high-stress levels endemic to companies in their industry, game developer Supergiant Games instituted a policy stating that workers must take a minimum of 20 days off annually while still allowing for unlimited time away. A similar policy for your workplace will help employees cool off right when they need to the most. ... Your workers will never be able to achieve stress equilibrium if their boss can’t do it first. Being a great business leader isn’t just about telling people what they need to do; it’s about modeling those behaviors yourself. If you’re preaching stress reduction to your team while clocking in 11 hours a day, no one is going to be able to take your messaging seriously. Stress management starts with you, whether you like it or not.


Google Introduces Low Bitrate Speech Codec For Smoother Communication

Lyra is a novel method for compressing and transmitting voice signals. For this, the researchers applied traditional codec techniques and the latest machine learning methods on models trained on vast amounts of data. Lyra extracts features or distinctive speech attributes (lists of numbers representing the speech energy in different frequency bands, called log mel spectrograms) from the input every 40ms and compresses them before transmitting. At the receiving end, a generative model converts the features back into a speech signal. Lyra’s new and improved ‘natural-sounding’ generative models maintain a low bitrate while achieving quality generally on par with state-of-the-art waveform codecs used in streaming platforms. However, one drawback of these generative models is computational complexity. To overcome this, Lyra uses a cheaper variation of WaveRNN, a recurrent generative model. Though it works at a lower rate, it generates multiple parallel signals in different frequencies. These signals are then combined to output a signal at the desired sample rate. Hence, Lyra works on cloud servers and mid-range phones with a processing latency of 90ms.
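The front-end feature step (frame the signal every 40ms, then log-compress per-band spectral energies) can be illustrated with a simplified numpy sketch. This is not Lyra's actual log-mel pipeline; the 16 kHz sample rate, Hann window, and band count here are assumptions for demonstration.

```python
import numpy as np

SAMPLE_RATE = 16000
FRAME_MS = 40
FRAME = SAMPLE_RATE * FRAME_MS // 1000  # 640 samples per 40 ms frame

def band_energies(signal, n_bands=8):
    """Return log energies in n_bands frequency bands, one row per 40 ms frame."""
    feats = []
    for start in range(0, len(signal) - FRAME + 1, FRAME):
        frame = signal[start:start + FRAME] * np.hanning(FRAME)
        spectrum = np.abs(np.fft.rfft(frame)) ** 2          # power spectrum
        bands = np.array_split(spectrum, n_bands)           # linear bands, not mel
        feats.append(np.log([b.sum() + 1e-10 for b in bands]))
    return np.array(feats)

# One second of a 440 Hz tone yields 25 feature frames (1000 ms / 40 ms).
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
feats = band_energies(np.sin(2 * np.pi * 440 * t))
print(feats.shape)  # (25, 8)
```

A real log mel spectrogram would replace the uniform `array_split` bands with triangular mel-scale filters, but the shape of the output, a small vector of log band energies per 40ms frame, is the kind of compact feature the transmitter sends instead of raw waveform samples.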


Cryptomining Botnet Uses Bitcoin Wallet to Avoid Detection

The initial infection starts with the exploitation of remote code execution vulnerabilities in Hadoop Yarn, Elasticsearch (CVE-2015-1427) and ThinkPHP (CVE-2019-9082). The payload delivered causes the vulnerable machine to download and execute a malicious shell script. "In older campaigns, the shell script itself handled the key functions of infection. The stand-alone script disabled security features, killed off competing infections, established persistence, and in some cases, continued infection attempts across networks found within the known host files," the report notes. But the newer instances of the shell script are written with fewer lines of code and use binary payloads for handling more system interactions, such as killing off competition, disabling security features, modifying SSH keys, downloading malware and starting the miners. Researchers note that the operators behind the campaign use cron jobs and rootkits for persistence and updates to distribution, ensuring infected machines will regularly check in and be reinfected with the latest version of the malware. These methods rely on domains and static IP addresses written into crontabs and configurations, and these domains and IP addresses routinely get identified and seized, the researchers say.


Saga Orchestration for Microservices Using the Outbox Pattern

There are two general ways for implementing distributed Sagas—choreography and orchestration. In the choreography approach, one participating service sends a message to the next after it has executed its local transaction. With orchestration, on the other hand, there’s one coordinating service that invokes one participant after the other. Both approaches have their pros and cons. Personally, I prefer the orchestration approach, as it defines one central place that can be queried to obtain the current status of a particular Saga (the orchestrator, or “Saga execution coordinator,” SEC for short). Since it avoids point-to-point communication between participants, (other than the orchestrator), it also allows for the addition of further intermediary steps within the flow, without the need to adjust each participant. Before diving into the implementation of such Saga flow, it’s worth spending some time to think about the transactional semantics that Sagas provide. ... From a service consumer point of view—e.g., a user placing a purchase order with the order service—the system is eventually consistent; i.e., it will take some time until the purchase order is in its correct state, as per the logic of the different participating services.
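A minimal orchestration-style coordinator can be sketched in plain Python: the orchestrator runs each step in order and, on failure, invokes the compensations in reverse. The step names and the injected failure below are invented for illustration; a production Saga orchestrator would also persist state (e.g., via the outbox pattern the article describes) so it can resume after a crash.

```python
# Hypothetical Saga steps; each pair is (action, compensating action).
def reserve_credit(ctx): ctx["credit_reserved"] = True
def release_credit(ctx): ctx["credit_reserved"] = False
def ship_order(ctx): raise RuntimeError("warehouse unavailable")  # simulated failure
def cancel_shipment(ctx): pass

SAGA = [(reserve_credit, release_credit), (ship_order, cancel_shipment)]

def run_saga(steps, ctx):
    """Orchestrator: run actions in order; on failure, compensate in reverse."""
    done = []
    for action, compensation in steps:
        try:
            action(ctx)
            done.append(compensation)
        except Exception:
            for comp in reversed(done):  # undo completed steps, newest first
                comp(ctx)
            return "aborted"
    return "completed"

ctx = {}
print(run_saga(SAGA, ctx))         # aborted
print(ctx["credit_reserved"])      # False — the reservation was compensated
```

Because the orchestrator holds the whole flow, it is also the single place to query for a Saga's current status, which is exactly the advantage claimed for orchestration over choreography above.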



Quote for the day:

"In any leadership position, the most important aspect of your job will be getting your team to work together." -- Dale Brown

Daily Tech Digest - April 23, 2020

Indian IT desperately needed a new business model and coronavirus gave it one

Some IT companies have implemented "employee productivity trackers like webcam-based movement capture, hourly timesheet entry, tracking of keyboards, and so on, to ensure employees are working at home," Yugal Joshi, vice-president at Texas-based consultancy Everest Group, told Quartz. "This indicates a deep-rooted malaise in Indian IT/ITes industry where the senior management generally mistrusts people," he added. Two, unlike the retail or manufacturing sectors that cannot operate with current social distancing norms, the top-tier Indian IT companies and their mid-sized brethren are responsible for keeping the lights on for a large collection of global companies -- some of which people depend on every second of the day. This includes banks, utility companies, retailers, and, of course, pharmaceuticals. With the ongoing coronavirus outbreak, all of these industries are now being serviced from the apartments and houses of India's IT workforce, which as you can imagine, is a supremely difficult and exasperating task for everyone involved. Most of IT's clients have ironclad regulatory and privacy riders that have needed to be tweaked considerably in light of coronavirus.



How a basic cross-training program can ease disruptions on the IT team

If the coronavirus hasn't disrupted your business operations yet, there's a good chance it will soon. This first wave of illness will not be the last time the coronavirus disrupts daily business operations. First companies had to adjust to remote work for all employees. The next challenge may be filling in for colleagues who are out sick or caring for family members or friends who are ill. A cross-training program can make this transition go smoothly. Sam Maley, an IT operations manager at Bailey & Associates, an IT consultancy, said cross-training can minimize disruptions and reduce stress levels due to absenteeism. "Cross-training programs are designed to build versatility and skill overlaps in your team members," he said. Jeff Fleischman, CMO at the consulting firm Altimetrik, said cross-training needs to be part of business continuity plans. "To receive buy-in from top management, quantify the impact disruption has on the business such as revenue loss, reputational risk, defaulting on contractual obligations, and failing to meet regulatory requirements, and then explain how cross-training would eliminate these risks," Fleischman said.


Kubernetes vs. VMware: Drive the choice with IT architecture


The choice to run either containers in VMs vs. VMs in containers is an architectural design decision. This is because there's a line of thought that containers are the ideal abstraction for multi-cloud application delivery. Though VMware assures admins containers and VMs are the same in vSphere, it's difficult to draw a similar comparison for Kubernetes and VMs. Kubernetes is an orchestration product that admins use primarily for containers. In theory, Kubernetes could manage compute resources other than containers. However, a container as the primary abstraction layer means that traditional VM management tools don't map directly. Though networking can help solve this issue, KubeVirt could be the answer. KubeVirt uses Kubernetes network architecture and plugins rather than hypervisor abstractions, such as vSwitches, to manage networking. As a result, products must switch to network management based on Kubernetes namespaces. That's not necessarily a bad thing; it's just an overall change in operations mode from a VM-centric operating model to a container-centric operating model.



Researchers Release Open Source Counterfactual Machine Learning Library

Three Counterfactuals for Loan Application Scenario
Exactly what machine learning counterfactuals are, and the reasons why they are important, are best explained by example. Suppose a loan company has a trained ML model that is used to approve or decline customers' loan applications. The predictor variables (often called features in ML terminology) are things like annual income, debt, sex, savings, and so on. A customer submits a loan application. Their income is $45,000 with debt = $11,000 and their age is 29 and their savings is $6,000. The application is declined. A counterfactual is a change to one or more predictor values that results in the opposite result. For example, one possible counterfactual could be stated in words as, "If your income was increased to $60,000 then your application would have been approved." In general, there will be many possible counterfactuals for a given ML model and set of inputs. Two other counterfactuals might be, "If your income was increased by $50,000 and debt was decreased to $9,000 then your application would have been approved" and, "If your income was increased to $48,000 and your age was changed to 36 then your application would have been approved." Figure 1 illustrates three such counterfactuals for a loan scenario.
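The loan example lends itself to a tiny brute-force counterfactual search: step one feature until the model's decision flips. The scoring rule, threshold, and step size below are made-up stand-ins for a trained model, chosen only to make the mechanics visible.

```python
# Toy decision rule standing in for a trained approval model (illustrative only).
def approve(income, debt, savings):
    return income - 0.5 * debt + 0.2 * savings >= 52000

applicant = {"income": 45000, "debt": 11000, "savings": 6000}
assert not approve(**applicant)  # the application is declined

def counterfactual(applicant, feature, step, limit):
    """Find the smallest single-feature change (in `step` increments) that flips
    the decision, or None if no flip occurs within `limit` steps."""
    cand = dict(applicant)
    for _ in range(limit):
        cand[feature] += step
        if approve(**cand):
            return cand
    return None

cf = counterfactual(applicant, "income", step=1000, limit=50)
print(cf["income"])  # 57000 — "if your income were $57,000, you'd be approved"
```

Real counterfactual libraries search over multiple features at once and optimize for the smallest, most plausible change, but the core idea is this same flip-the-decision search.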


What is value stream mapping? A lean technique for improving business processes

Before you can start building a value stream map, you need to objectively evaluate your organization’s business processes, products and systems. Start by talking to leadership, department heads and other key stakeholders who can give you more insight into what can be improved. You’ll need to get hands-on experience with the process, product or system yourself and have other employees walk you through their part. It’s important to collect as much data as possible — for example, any inefficiencies in the process, how many workers are involved, what resources are used and any downtime. Any potentially relevant or noteworthy data is helpful in fleshing out your final VSM flow chart and achieving insights into what can be refined or improved. You’ll then create two separate VSM flow charts — a current state value stream map and a future state value stream map. Your current state VSM will be used to establish how the process currently runs and functions in the business. This is where you will demonstrate issues, significant findings and establish key requirements. The future state VSM, on the other hand, focuses on what your process will look like once your organization has completed all of the necessary improvements.


Ethernet consortium announces completion of 800GbE spec 

Based on many of the technologies used in the current top-end 400 Gigabit Ethernet protocol, the new spec is formally known as 800GBASE-R. The consortium that designed it (then known as the 25 Gigabit Ethernet Consortium) was also instrumental in developing the 25, 50, and 100 Gigabit Ethernet protocols and includes Broadcom, Cisco, Google, and Microsoft among its members. The 800GbE spec adds new media access control (MAC) and physical coding sublayer (PCS) methods, which tweak these functions to distribute data across eight physical lanes running at a native 106.25Gbps. (A lane can be a copper twisted pair or, in optical cables, a strand of fiber or a wavelength.) The 800GBASE-R specification is built on two 400GbE PCSs to create a single MAC that operates at a combined 800Gbps. And while the focus is on eight 106.25G lanes, it's not locked in: it is possible to run 16 lanes at half the speed, or 53.125Gbps. The new standard offers half the latency of the 400GbE specification, and it also cuts the forward error correction (FEC) overhead on networks running at 50Gbps, 100Gbps, and 200Gbps by half, thus reducing the packet-processing load on the NIC.
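The lane arithmetic checks out with a back-of-the-envelope calculation. Assuming 800GBASE-R inherits the 256b/257b transcoding and RS(544,514) Reed-Solomon FEC used by 400GBASE-R (an assumption on my part, since the digest excerpt doesn't name the coding scheme), 800Gbps of MAC payload grows to 850Gbps on the wire, which divides evenly across eight 106.25Gbps lanes:

```python
# Back-of-the-envelope 800GBASE-R lane math. Overhead factors assume the
# 256b/257b transcoding and RS(544,514) FEC inherited from 400GBASE-R.
MAC_RATE = 800.0            # Gbps of payload at the MAC
transcode = 257 / 256       # 256b/257b transcoding overhead
fec = 544 / 514             # RS(544,514) FEC overhead

line_rate = MAC_RATE * transcode * fec   # total signaling rate on the wire
per_lane = line_rate / 8                 # native eight-lane option
half_rate = per_lane / 2                 # sixteen-lane option

print(line_rate, per_lane, half_rate)    # 850.0, 106.25, 53.125
```

The same math explains the 16-lane alternative: halving the per-lane rate to 53.125Gbps while doubling the lane count leaves the aggregate rate unchanged.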


Application performance for remote workers becomes primary network issue for businesses


In addition to the top-line finding of dealing with complexity and performance, the study also highlighted that cost had become less of an issue for respondents, who also cited significant investment in automation, security, cloud connectivity and the potential of 5G. Drilling deeper into the pressing issues for firms, Aryaka found that as the number of remote workers increases across the globe, productivity and remote application performance have become more important for organisations across Europe, the Middle East and Africa (EMEA). Some 45% of UK businesses noted that slow application performance led to a poor user experience for remote and mobile users, and that it was a significant issue faced by IT and support teams. Accessing and integrating cloud and software-as-a-service (SaaS) applications was one of the most pressing issues for UK IT departments, cited by 39%.


Ransomware is now the biggest online menace you need to worry about - here's why


One of the reasons why ransomware attacks have risen so much is because cyber criminals increasingly view it as the simplest and quickest means of making money from compromised networks. With ransomware, attackers can lock down an organisation's entire network and demand a bitcoin payment in exchange for the decryption key. Ransomware attacks are often successful because organisations opt to pay the ransom demand, viewing it as the quickest and easiest way to restore functionality to the network, despite authorities warning never to give in to the demands of extortionists. These ransom demands commonly reach six-figure sums and, because the transfer is made in bitcoin, it's relatively simple for the criminals to launder it without it being traced back to them. "The 'beauty' of the ransomware model is you only need to write the ransomware once and its potential to infect is only limited by its reach, which with the internet is unlimited," Ed Williams, EMEA director of SpiderLabs, the research division at Trustwave, told ZDNet.


Remote business continuity techniques to implement now


This is not just an issue when facing a pandemic. If your business continuity plan addresses only short-term disruptions, such as those that last less than a month, it may not be prepared for an extended outage. Your technology disaster recovery plan may need to be activated if outages occur because too few IT staff are available, or because a shortage of vendor personnel disrupts technology services. Fortunately, many data centers are designed to operate without human intervention or with remote access to system administration functions. Technology vendors frequently use managed IT resources such as cloud-based systems to support their service offerings, which reduces the likelihood of outages as long as the managed service providers are able to keep their systems operational. Because many organizations use remotely hosted applications, users can keep working in those systems so long as the vendors keep their operations running. The real challenge falls on organizations with mostly locally hosted systems and databases, which must find a way to manage those assets remotely.


New Enterprise Graph Framework for Data Scientists Leverages Machine Learning

The new Neo4j for Graph Data Science framework is designed to enable data scientists to operationalize better analytics and machine learning models that infer behavior based on connected data and network structures, Frame described. The framework, she said in a statement announcing the product release, is intended to provide the most expeditious way to generate better predictions. "A common misconception in data science is that more data increases accuracy and reduces false positives," she explained. "In reality, many data science models overlook the most predictive elements within data -- the connections and structures that lie within. Neo4j for Graph Data Science was conceived for this purpose -- to improve the predictive accuracy of machine learning, or answer previously unanswerable analytics questions, using the relationships inherent within existing data."
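The claim that connections themselves carry predictive signal can be illustrated with one of the simplest graph features used for link prediction: a common-neighbors score. This is a plain-Python sketch of the idea, not the Neo4j Graph Data Science API; the edge list is made up for the example.

```python
# Common-neighbors score: a basic graph feature for link prediction.
# Plain-Python illustration (not the Neo4j GDS API); edges are hypothetical.
from itertools import combinations

edges = [("ann", "bob"), ("ann", "carl"), ("bob", "carl"),
         ("carl", "dana"), ("bob", "dana")]

# Build an undirected adjacency map from the edge list.
adj = {}
for a, b in edges:
    adj.setdefault(a, set()).add(b)
    adj.setdefault(b, set()).add(a)

def common_neighbors(u, v):
    """More shared neighbors suggests a higher chance of a future link."""
    return len(adj[u] & adj[v])

# Score every currently unconnected pair as a candidate future link.
candidates = {(u, v): common_neighbors(u, v)
              for u, v in combinations(sorted(adj), 2)
              if v not in adj[u]}
print(candidates)  # {('ann', 'dana'): 2} -- two shared neighbors
```

Graph data science platforms compute richer versions of this (PageRank, community detection, node embeddings) at scale, but the underlying point is the same: the feature comes from the relationships, not from any single record.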



Quote for the day:


"Leadership is the wise use of power. Power is the capacity to translate intention into reality and sustain it." -- Warren Bennis