
Daily Tech Digest - June 21, 2025


Quote for the day:

“People are not lazy. They simply have important goals – that is, goals that do not inspire them.” -- Tony Robbins


AI in Disaster Recovery: Mapping Technical Capabilities to Real Business Value

Despite its promise, AI introduces new challenges, including security risks and trust deficits. Threat actors leverage the same AI advancements, targeting systems with more precision and, in some cases, undermining AI-driven defenses. In the Zerto–IDC survey mentioned earlier, for instance, only 41% of respondents felt that AI is “very” or “somewhat” trustworthy; 59% felt that it is “not very” or “not at all” trustworthy. To mitigate these risks, organizations must adopt AI responsibly. For example, combining AI-driven monitoring with robust encryption and frequent model validation ensures that AI systems deliver consistent and secure performance. Furthermore, organizations should emphasize transparency in AI operations to maintain trust among stakeholders. Successful AI deployment in DR/CR requires cross-functional alignment between ITOps and management. Misaligned priorities can delay response times during crises, exacerbating data loss and downtime. Additionally, the IT skills shortage is still very much underway: a separate recent IDC study predicts that 9 out of 10 organizations will feel an impact by 2026, at a cost of $5.5 trillion in potential delays, quality issues, and revenue loss across the economy. Integrating AI-driven automation can partially mitigate these impacts by optimizing resource allocation and reducing dependency on manual intervention.


The Quantum Supply Chain Risk: How Quantum Computing Will Disrupt Global Commerce

Whether it's APIs, middleware, firmware-embedded devices, or operational technology, they're all built on the same outdated encryption and systems of trust. One of the biggest threats from quantum computing will be to all this unseen machinery that powers global digital trade. These systems handle the backend of everything from routing cargo to scheduling deliveries and clearing large shipments, but they were never designed to withstand the threat of quantum. Attackers will be able to break in quietly — injecting malicious code into control software and ERP systems, or impersonating suppliers to communicate malicious information and hijack digital workflows. Quantum computing won't necessarily attack these industries directly, but it will corrupt the systems that power the global economy. ... Some of the most dangerous attacks are being staged today, with many nation-states and bad actors storing encrypted data, from procurement orders to shipping records. When quantum computers are finally able to break those encryption schemes, attackers will be able to decrypt them in what's been coined a Harvest Now, Decrypt Later (HNDL) attack. These attacks, although retroactive in nature, represent one of the biggest threats to the integrity of cross-border commerce. Global trade depends on digital provenance: handling goods and proving where they came from.


Securing OT Systems: The Limits of the Air Gap Approach

Aside from susceptibility to advanced tactics, techniques, and procedures (TTPs) such as thermal manipulation and magnetic fields, more common vulnerabilities associated with air-gapped environments include factors such as unpatched systems going unnoticed, lack of visibility into network traffic, potentially malicious devices coming on the network undetected, and removable media being physically connected within the network. Once an attack is inside OT systems, the consequences can be disastrous regardless of whether there is an air gap. However, it is worth considering how the existence of the air gap can affect time-to-triage and remediation in the case of an incident. ... This incident reveals that even if a sensitive OT system has complete digital isolation, this robust air gap still cannot fully eliminate one of the greatest vulnerabilities of any system—human error. Human error would still be a factor even if an organization went to the extreme of building a Faraday cage to eliminate electromagnetic radiation. Air-gapped systems are still vulnerable to social engineering, which exploits human vulnerabilities, as seen in the tactics that Dragonfly and Energetic Bear used to trick suppliers, who then walked the infection right through the front door. Ideally, a technology would be able to identify an attack regardless of whether it is caused by a compromised supplier, radio signal, or electromagnetic emission.


How to Lock Down the No-Code Supply Chain Attack Surface

A core feature of no-code development, third-party connectors allow applications to interact with cloud services, databases, and enterprise software. While these integrations boost efficiency, they also create new entry points for adversaries. ... Another emerging threat involves dependency confusion attacks, where adversaries exploit naming collisions between internal and public software packages. By publishing malicious packages to public repositories with the same names as internally used components, attackers could trick the platform into downloading and executing unauthorized code during automated workflow executions. This technique allows adversaries to silently insert malicious payloads into enterprise automation pipelines, often bypassing traditional security reviews. ... One of the most challenging elements of securing no-code environments is visibility. Security teams struggle with asset discovery and dependency tracking, particularly in environments where business users can create applications independently without IT oversight. Applications and automations built outside of IT governance may use unapproved connectors and expose sensitive data, since they often integrate with critical business workflows. 
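The registry-collision check at the heart of dependency-confusion defenses can be sketched in a few lines. This is a hypothetical helper, not part of any particular no-code platform; `find_collisions` and the package names are illustrative:

```python
def find_collisions(internal_packages, public_packages):
    """Flag internal package names that also exist on a public registry.

    A collision is a dependency-confusion candidate: a build tool
    configured to prefer the public index could pull an attacker's copy.
    """
    # Normalize names the way most registries do: case-insensitive,
    # with '-' and '_' treated as equivalent.
    norm = lambda name: name.lower().replace("_", "-")
    public = {norm(p) for p in public_packages}
    return sorted(p for p in internal_packages if norm(p) in public)

# Example: 'acme-billing' exists both internally and publicly, so it is flagged.
risky = find_collisions(
    internal_packages=["acme-billing", "acme_core", "acme-secret-utils"],
    public_packages=["requests", "acme-billing", "numpy"],
)
print(risky)  # ['acme-billing']
```

A real defense would also reserve the internal names on the public registry or pin a private index, but the audit step above is a useful first pass.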


Securing Your AI Model Supply Chain

Supply-chain Levels for Software Artifacts (SLSA) is a comprehensive framework designed to protect the integrity of software artifacts, including AI models. SLSA provides a set of standards and practices to secure the software supply chain from source to deployment. By implementing SLSA, organizations can ensure that their AI models are built and maintained with the highest levels of security, reducing the risk of tampering and ensuring the authenticity of their outputs. ... Sigstore is an open-source project that aims to improve the security and integrity of software supply chains by providing a transparent and secure way to sign and verify software artifacts. Using cryptographic signatures, Sigstore ensures that AI models and other software components are authentic and have not been tampered with. This system allows developers and organizations to trace the provenance of their AI models, ensuring that they originate from trusted sources. ... The most valuable takeaway for ensuring model authenticity is the implementation of robust verification mechanisms. By utilizing frameworks like SLSA and tools like Sigstore, organizations can create a transparent and secure supply chain that guarantees the integrity of their AI models. This approach helps build trust with stakeholders and ensures that the models deployed in production are reliable and free from malicious alterations.
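The verification idea underneath these tools can be illustrated with plain digest checking. This sketch is not the Sigstore API (which adds keyless signing and transparency logs); it only shows the core compare-against-a-trusted-digest step, with hypothetical function names:

```python
import hashlib
import os
import tempfile

def sha256_digest(path):
    """Stream a file in chunks and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path, expected_digest):
    """Reject the artifact if it does not match the published digest."""
    actual = sha256_digest(path)
    if actual != expected_digest:
        raise ValueError(f"digest mismatch: model tampered or corrupted ({actual})")
    return True

# Demo: write a stand-in "model file", record its digest, then verify it.
fd, model_path = tempfile.mkstemp(suffix=".onnx")
with os.fdopen(fd, "wb") as f:
    f.write(b"fake model weights")
trusted_digest = hashlib.sha256(b"fake model weights").hexdigest()
ok = verify_model(model_path, trusted_digest)
os.remove(model_path)
```

In practice the expected digest would come from a signed provenance document or a Sigstore verification step, never from a string shipped alongside the artifact itself.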


Data center retrofit strategies for AI workloads

AI accelerators are highly sensitive to power quality. Sub-cycle power fluctuations can cause bit errors, data corruption, or system instability. Older uninterruptible power supply (UPS) systems may struggle to handle the dynamic loads AI can produce, often involving three MW sub-cycle swings or more. Updating the electrical distribution system (EDS) is an opportunity that includes replacing dated UPS technology, which often cannot handle the dynamic AI load profile, redesigning power distribution for redundancy, and ensuring that power supply configurations meet the demands of high-density computing. ... With the high cost of AI downtime, risk mitigation becomes paramount. Energy and power management systems (EPMS) are capable of high-resolution waveform capture, which allows operators to trace and address electrical anomalies quickly. These systems are essential for identifying the root cause of power quality issues and coordinating fast response mechanisms. ... No two mission-critical facilities are the same regarding space, power, and cooling. Add the variables of each AI deployment, and what works for one facility may not be the best fit for another. That said, there are some universal truths about retrofitting for AI. You will need engineers who are well-versed in various equipment configurations, including cooling and electrical systems connected to the network. 


Is it time for a 'cloud reset'? New study claims public and private cloud balance is now a major consideration for companies across the world

Enterprises often still have some kind of cloud-first policy, he outlined, but they have realized they need some form of private cloud too, typically because public cloud does not meet the needs of some workloads, mainly around cost, complexity, and compliance. However, the problem is that because public cloud has taken priority, infrastructure has not grown in the right way. So, increasingly, Broadcom's conversations are now with customers realizing they need to focus on both public and private cloud, and some on-prem, Baguley says, as they're realizing, "we need to make sure we do it right, we're doing it in a cost-effective way, and we do it in a way that's actually going to be strategically sensible for us going forward." "In essence, they've realised they need to build something on-prem that can not only compete with public cloud, but actually be better in various categories, including cost, compliance and complexity." ... In order to help with these concerns, Broadcom has released VMware Cloud Foundation (VCF) 9.0, the latest edition of its platform to help customers get the most out of private cloud. Described by Baguley as "the culmination of 25 years' work at VMware", VCF 9.0 offers users a single platform with one SKU, giving them improved visibility while supporting all applications with a consistent experience across the private cloud environment.


Cloud in the age of AI: Six things to consider

This is an issue impacting many multinational organizations, driving the growth of regional and even industry-specific clouds, which offer tailored compliance, security, and performance options. As organizations try to architect infrastructure that supports their future states, with a blend of cloud and on-prem, data sovereignty is an increasingly large issue. I hear a lot from IT leaders about how they must consider local and regional regulations, which adds a consideration to the simple concept of migration to the cloud. ... Sustainability was always the hidden cost of connected computing. Hosting data in the cloud consumes a lot of energy. Financial cost is top of mind when IT leaders talk about driving efficiency through the cloud right now. It's also at the root of a lot of talk about moving to the edge and using AI-infused end-user devices. But expect sustainability to become an increasingly important factor in cloud: geopolitical instability, the cost of energy, and the increasing demands of AI will see to that. ... The AI PC pitch from hardware vendors is that organizations will be able to build small 'clouds' of end-user devices. Specific functions and roles will work on AI PCs and do their computing at the edge. The argument is compelling: better security and efficient modular scalability. Not every user or function needs all capabilities and access to all data.


Creating a Communications Framework for Platform Engineering

When platform teams focus exclusively on technical excellence while neglecting a communication strategy, they create an invisible barrier between the platform’s capability and its business impact. Users can’t adopt what they don’t understand, and leadership won’t invest in what they can’t measure. ... To overcome engineers’ skepticism of new tools that may introduce complexity, your communication should clearly articulate how the platform simplifies their work. Highlight its ability to reduce cognitive load, minimize context switching, enhance access to documentation and accelerate development cycles. Present these advantages as concrete improvements to daily workflows, rather than abstract concepts. ... Tap into the influence of respected technical colleagues who have contributed to the platform’s development or were early adopters. Their endorsements are more impactful than any official messaging. Facilitate opportunities for these champions to demonstrate the platform’s capabilities through lightning talks, recorded demos or pair programming sessions. These peer-to-peer interactions allow potential users to observe practical applications firsthand and ask candid questions in a low-pressure environment.


Why data sovereignty is an enabler of Europe’s digital future

Data sovereignty has broad-reaching implications, with potential impact on many areas of a business extending beyond the IT department. One of the most obvious examples is the legal and finance departments, where GDPR and similar legislation require granular control over how data is stored and handled. The harsh reality is that any gaps in compliance could result in legal action, substantial fines, and subsequent damage to longer-term reputation. Alongside this, providing clarity on data governance increasingly factors into trust and competitive advantage, with customers and partners keen to eliminate grey areas around data sovereignty. ... One way that many companies are seeking to gain more control and visibility of their data is by repatriating specific data sets from public cloud environments to on-premises storage or private clouds. This is not about reversing cloud adoption; instead, repatriation is a sound way of achieving compliance with local legislation and ensuring there is no scope for questions over exactly where data resides. In some instances, repatriating data can improve performance, reduce cloud costs, and provide assurance that data is protected from foreign government access. Additionally, on-premises or private cloud setups can offer the highest levels of security from third-party risks for the most sensitive or proprietary data.

Daily Tech Digest - June 03, 2025


Quote for the day:

"Keep your fears to yourself, but share your courage with others." -- Robert Louis Stevenson


Is it Time to Accept that the Current Role of the CISO Has Failed?

First of all, it was never conceived as a true C-level role. It probably originated in the minds of some organisation consultants, but it never developed any true C-level weight. Even if it may hurt some, it is my opinion that it was very rarely given to people with true C-level potential. Second, it was almost always given to technologists by trade or background, although the underlying matter is unequivocally cross-functional and always has been: you cannot be successful around identity and access management, for example, without the involvement of HR and business units, and the ability to reach credibly towards them. ... It has aggregated a mixed set of responsibilities and accountabilities without building up the right organisational and managerial momentum, and many CISOs are simply being set up to fail: the role has simply become too complex to carry for the profile of the people it attracts. To break this spiral, the logic is now to split the role, stripping off the managerial layers it has accumulated over the years and refocusing the CISO on its native technical content so that it can lead effectively and efficiently at that level, while at the same time standing up a CSO role able to reach across business, IT, and support functions to take charge of the corporate complexity that cybersecurity is now amalgamating in large firms.


How to Fortify Your Business’s Online Infrastructure Against Downtime

The first step to protecting your online infrastructure against downtime is to assess just how much downtime risk is viable for your business. Understanding how much downtime you can realistically afford is important for developing a sound IT strategy. Your viable downtime limit will define your tolerance to risk and allow you to direct your resources toward keeping your systems running optimally as far as possible. The average accepted downtime rate for a website is just 0.05%. That means your systems should experience uptime at least 99.95% of the time. If you have a low risk tolerance – say, for instance, if you rely on an ecommerce platform to generate revenue – investing in IT continuity technology is essential for keeping downtime minimal. ... The first step to safeguarding your organization against cyberattacks is to regularly audit your network security measures. This helps you spot vulnerabilities and address them, ensuring your IT systems are always protected against continuously advancing threats. Begin by creating a map of your existing network infrastructure, including all of its user access points, hardware, and software. This map will allow you to keep track of your environment and quickly identify unauthorized changes and additions.
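The 99.95% figure translates directly into a downtime budget. A minimal sketch of that arithmetic (the function name is illustrative):

```python
def downtime_budget_minutes(uptime_pct, period_minutes=365 * 24 * 60):
    """Minutes of downtime allowed per period at a given uptime percentage."""
    return period_minutes * (1 - uptime_pct / 100)

# 99.95% uptime over a year leaves a budget of about 262.8 minutes.
budget = downtime_budget_minutes(99.95)
print(round(budget, 1))  # 262.8
```

At 99.95% uptime, that budget is roughly 4.4 hours of downtime per year, which is the concrete number to weigh against the cost of continuity technology.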


Private cloud still matters—but it doesn’t matter most

Large enterprises will maintain significant on-premises footprints for the foreseeable future, for all the reasons we’ve discussed. The enterprise IT landscape in 2025 is undeniably hybrid and likely always will be. But it’s equally undeniable that the center of gravity for innovation has shifted. When a new opportunity emerges—say, deploying a breakthrough AI model or scaling a customer-facing app to millions of users overnight—companies aren’t spinning up a new on-premises cluster to meet the moment. They’re tapping the virtually unlimited resources of AWS, Azure, Google, or edge networks like Cloudflare. They’re doing so because cloud offers experimentation without hardware procurement, and success isn’t gated by how many servers you happen to own. Private clouds excel at running the known and steady. Public clouds excel at unleashing the unknown and extraordinary. As we reach a cloud/on-prem equilibrium, this division of labor is becoming clearer. The day-to-day workloads that keep the business running may happily live in a familiar private cloud enclave. But the industry-defining projects, the ones leaders hope will define the business’s future, gravitate to infrastructure that can stretch to any size, in any region, at a moment’s notice. 


Why Generative AI Needs Architecture, Not Just APIs

The root of the problem often lies in treating gen AI as an add-on to legacy systems rather than embedding it into core operations. This leads to inconsistent implementation, unclear ownership, and limited returns. To deliver meaningful outcomes, organizations must start by identifying areas where gen AI can enhance decisions, such as customer engagement, service workflows, and regulatory compliance. ... When the focus is only on launching siloed applications, organizations may move fast initially, but they end up with systems that are difficult to scale, integrate, or adapt. That's where architecture-centric thinking becomes critical. A strong architectural foundation built on modularity, interoperability, and scalability ensures that future applications don't just add features but add value; one needs to build to last. This means building platforms that support change, not just one-off projects. It's also about fostering collaboration between business and IT, so decisions can be made with both speed and stability in mind. ... The "situational layer cake" architecture enables enterprises to build applications in distinct layers, such as enterprise-wide, division-specific, and implementation layers, facilitating a balance between reusability and customization. This structure allows the creation of reusable components that can be tailored to specific business contexts without redundant coding, streamlining operations and reducing complexity.
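The layered override idea can be sketched as a simple configuration merge, where more specific layers win. This is an illustrative reduction of the "situational layer cake", not any vendor's implementation; all names and settings are hypothetical:

```python
def resolve(layers):
    """Merge configuration layers; later (more specific) layers win.

    Mirrors the layer cake: enterprise-wide defaults first, then
    division-specific overrides, then implementation-level settings.
    """
    merged = {}
    for layer in layers:
        merged.update(layer)
    return merged

enterprise = {"logging": "central", "auth": "sso", "retention_days": 365}
division = {"retention_days": 1825}            # finance keeps data longer
implementation = {"model": "credit-risk-v2"}   # app-specific choice

config = resolve([enterprise, division, implementation])
print(config["retention_days"])  # 1825
```

The payoff is that the enterprise layer is written once and reused, while each division or application overrides only what its context requires.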


Scattered Spider: Understanding Help Desk Scams and How to Defend Your Organization

The goal of a help desk scam is to get the help desk operator to reset the credentials and/or MFA used to access an account so the attacker can take control of it. They'll use a variety of backstories and tactics to get that done, but most of the time it's as simple as saying "I've got a new phone, can you remove my existing MFA and allow me to enroll a new one?" From there, the attacker is then sent an MFA reset link via email or SMS. Usually, this would be sent to, for example, a number on file — but at this point, the attacker has already established trust and bypassed the help desk process to a degree. So asking "Can you send it to this email address" or "I've actually got a new number too, can you send it to…" gets this sent directly to the attacker. ... But, help desks are a target for a reason. They're "helpful" by nature. This is usually reflected in how they're operated and performance measured — delays won't help you to hit those SLAs! Ultimately, a process only works if employees are willing to adhere to it — and can't be socially engineered to break it. Help desks that are removed from day-to-day operations are also inherently susceptible to attacks where employees are impersonated. But, the attacks we're experiencing at the moment should give security stakeholders plenty of ammunition as to why help desk reforms are vital to securing the business.


Banking on intelligence: How AI is powering the next evolution of financial services

With constantly evolving regulations, financial institutions need stringent compliance measures to avoid penalties and disruptions. AI steps in as a powerful ally, automating compliance tasks to slash manual workloads and boost reporting accuracy. AI agents digest regulatory data, churn out compliance reports, and handle KYC/AML validations—cutting errors while speeding up the process. While implementing the changes, financial institutions must comply with data localisation mandates and ensure AI solutions are hosted within India. To mitigate data privacy risks, personally identifiable information (PII) is anonymised, and AI is deployed within Virtual Private Cloud environments. AI systems automate document verification, ensuring consistent validation and improving audit readiness. ... AI-enabled Underwriting Workbench is an immensely helpful tool for streamlining documentation and offering a single-window interface. GenAI further enhances credit assessments by analysing alternative data—like transaction history, social media, and employment records—offering a comprehensive view of an applicant’s financial health. This enables banks to make inclusive, risk-aware lending decisions. Agentic AI further calibrates the process by automating tasks like application assessments and borrower information verifications, enabling near-instant loan decisions with minimal human intervention.
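The PII-handling step described above can be sketched as pseudonymisation: replacing identifiers with stable tokens before records reach a model. This is an illustrative sketch, not a complete anonymisation scheme (true anonymisation must also consider re-identification risk); the `pseudonymise` helper and the salt handling are assumptions:

```python
import hashlib

def pseudonymise(record, pii_fields=("name", "email", "phone")):
    """Replace PII values with stable salted hash tokens.

    The token lets downstream models join records for the same customer
    without seeing the raw identifier. In a real deployment the salt
    would come from a secret store, not a literal in the code.
    """
    SALT = b"rotate-me"
    out = dict(record)
    for field in pii_fields:
        if field in out:
            digest = hashlib.sha256(SALT + str(out[field]).encode()).hexdigest()
            out[field] = digest[:16]  # truncated token, not reversible
    return out

masked = pseudonymise({"name": "A. Kumar", "email": "a@example.com", "txn": 420})
print(masked["txn"])  # 420 -- non-PII fields pass through untouched
```

Because the tokens are deterministic, the same customer hashes to the same value across records, which preserves joins for KYC/AML analytics while keeping raw identifiers out of the AI pipeline.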


Why the end of Google as we know it could be your biggest opportunity yet

Now, before you think I'm writing Google's obituary, let me be clear. Like I've said before, I'm confident they'll figure it out, even if that means changing their business model. That said, if your business depends on Google in any way, whether it's your business profile, reviews, SEO, or products like Ad Manager to drive traffic, you need to pay attention to what's happening. ... The Department of Justice and several states are suing Google's parent company, Alphabet, arguing that its exclusive deals with companies like Apple are anticompetitive and potentially monopolistic. Basically, Google is paying billions to be the default search engine on Apple devices, effectively shutting out any real competition. The ruling in this case could break up their reported $20 billion-a-year agreement. ... Long story short, the way people discover, research, and choose businesses is changing one AI update at a time, but it's essential to note that people are still searching, just not in the same places they used to. That nuance is critical to understanding your next move. As more users turn to AI tools like ChatGPT and Perplexity for answers, traditional search engines are no longer the only gateway to your business. This shift in behavior over time will result in less traffic to your product or service. 


How global collaboration is hitting cybercriminals where it hurts

Collaboration and intelligence sharing are at the heart of our approach to tackling the threat within the NCA, and we enjoy relationships with partners across the public and private sector, both nationally and internationally. We're united and motivated, in many ways, by a common mission. Some of these are formalised law enforcement relationships that we have had for a long time – for example, I was the NCA's embed to the FBI in Washington DC for a number of years. But it is not just limited to the US – the NCA is lucky to enjoy brilliant relationships with the 'five eyes' countries and partners across Europe and beyond in the fight against cybercrime. ... In the NCA, we are predominantly focused on financially motivated cybercrime, with ransomware a main area of focus given how significant a threat it poses to the UK. We recognise that some cybercrime groups have connections to the Russian state, but assess that these types of deep-rooted relationships are likely to be the exception rather than the norm. When targeting the cybercrime threat, we have been focused on attaching cost and risk to the threat actors who seek to cause harm to us and our allies, and we achieve this in a number of different ways. The NCA-led disruption of LockBit in 2024 was successful in undermining trust between members of the group, as well as any trust that victims might have had in LockBit keeping their word.


Future-Proofing AI: Repeating Mistakes or Learning From the Past?

Are the enterprises rushing to deploy new open source AI projects taking the necessary security measures to isolate them from the rest of their infrastructure? Or are they disregarding recent open source security history and trusting them by default? Alarmingly, there are also reports that China-, North Korea- and Russia-based cybercriminal groups are actively targeting both physical and AI infrastructure while leveraging AI-generated malware to exploit vulnerabilities more efficiently. ... Next-generation AI infrastructure cannot be beholden to the performance penalties that arise from using today's solutions to create true, secure, multitenant environments. By combining the best aspects of bare-metal performance with container-like deployment models, organizations can build systems that deliver both speed and convenience. ... We cannot build a solid future if we ignore the wisdom of the past. The foundations of computing security, resource management, and operational efficiency were laid decades ago by pioneers who had to make every CPU cycle and memory byte count. Their lessons are more relevant now than ever as we build systems that consume unprecedented computational resources. The organizations that endure in the AI era won't necessarily be those with the largest infrastructure investments or the trendiest technology stacks.


Eight ways storage IT pros can evolve in the age of analytics and AI

Large organizations are spending millions of dollars annually on data storage, backups, and disaster recovery. On balance, there's nothing wrong with that since data is the center of everything today – but not all data should be treated the same. Using cost modeling tools, the storage manager can enter actual storage costs to project new storage costs and usable capacity upfront, based on data growth rates. These costs must factor in backups and disaster recovery, which can run 3X primary storage spending, and should compare on-premises versus cloud models. An unstructured data management system that indexes all data across all storage can supply metrics on data volumes, costs, and predicted costs, and then model plans for moving less-active data to lower-cost archival storage, such as in the cloud. ... Storage teams must mitigate ransomware risks associated with file data. One way to do this is by implementing hybrid tiering strategies that offload infrequently accessed (cold) files to immutable cloud storage, which reduces the active attack surface by as much as 70 to 80 percent. Immutable storage ensures that once data is written, it cannot be altered or deleted, providing a robust defense against ransomware attempts to encrypt or corrupt files.
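A back-of-the-envelope version of the cost model described above, folding in growth and the roughly 3X backup/DR multiplier, might look like this (the function name and all figures are illustrative):

```python
def projected_storage_cost(base_tb, cost_per_tb, growth_rate, years,
                           protection_multiplier=3.0):
    """Rough annual storage spend after `years` of compounding growth.

    `protection_multiplier` reflects the point that backups and disaster
    recovery can run about 3x primary storage spending.
    """
    capacity = base_tb * (1 + growth_rate) ** years
    primary = capacity * cost_per_tb
    return primary * (1 + protection_multiplier)

# 500 TB today at $100/TB/year, growing 30% annually, projected 3 years out:
cost = projected_storage_cost(500, 100, 0.30, 3)
print(round(cost))  # 439400
```

Running the same model with a tiering plan (a lower `cost_per_tb` for the archived share) is how the comparison against cloud or archival storage would be made.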

Daily Tech Digest - January 03, 2025

Tech predictions 2025 – what could be in store next year?

In 2025, we will hear of numerous cases where threat actors trick a corporate Gen AI solution into giving up sensitive information, causing high-profile data breaches. Many enterprises are using Gen AI to build customer-facing chatbots to aid everything from bookings to customer service. Indeed, to be useful, LLMs must ultimately be granted access to the information and systems needed to answer questions and take actions that a human would otherwise have been tasked with. As with any new technology, we will witness numerous corporations grant LLMs access to huge amounts of potentially sensitive data without appropriate security considerations. ... The future of work won't be a binary choice between humans and machines. It will be an "and." AI-powered humanoids will form part of the future workforce, and we will likely see the first instances next year. This will force companies to completely reimagine their workplace dynamics – and the technology that powers them. ... At the same time, organisations must ensure their security postures keep pace: not only to ensure the data being processed by humanoids is kept safe, but also to keep the humanoids themselves safeguarded from hacking and threatening tweaks to their software and commands.


7 Private Cloud Trends to Watch in 2025

A lot of organizations are repatriating workloads from public cloud to private cloud, but Rick Clark, global head of cloud advisory at digital transformation solutions company UST, warns they aren't giving it much forethought, like they did earlier when migrating to public clouds. As a result, they're not getting the ROI they hope for. "We haven't still figured out what is appropriate for workloads. I'm seeing companies wanting to move back the percentage of their workload to reduce cost without really understanding what the value is, so they're devaluing what they're doing," says Clark. ... Artificial intelligence and automation are also set to play a crucial role in private cloud management. They enable businesses to handle growing complexity by automating resource optimization, enhancing threat detection, and managing costs. "The ongoing talent shortage in cybersecurity makes [AI and automation] especially valuable. By reducing manual workloads, AI allows companies to do more with fewer resources," says Trevor Horwitz, CISO and founder at a cybersecurity, consulting, and compliance services provider. ... Security affects all aspects of a cloud journey, including the calculus of when and where to use private cloud environments. One significant challenge is making sure that all layers of the stack have detection and response capability.


Agility in Action: Elevating Safety through Facial Recognition

Facial Recognition Technology (FRT) stands out as a leading solution to these problems, protecting not only the physical boundaries but also the organization’s overall integrity. Through precise identity verification and user validation, FRT considerably lowers the possibility of unauthorized access. Organizations, irrespective of size, can benefit from this technology, which offers improved security and operational effectiveness. ... A comprehensive physical security program with interconnected elements serves as the backbone of any security infrastructure. Regulating who can enter or exit a facility is vital. Effective systems include traditional mechanical methods, such as locks and keys, as well as electronic solutions like RFID cards. By using these methods, only authorized persons are able to enter. Nonetheless, a technological solution that works with many Original Equipment Manufacturers (OEMs) is required to successfully counter today’s dangers. In addition to guaranteeing general user convenience, this technology should give top priority to data privacy and safety compliance.
Effective physical security is built on deterring unauthorized entry and identifying people of interest. This can include anything from physical security personnel to surveillance and access control systems.


Strategies for Managing Data Debt in Growing Organizations

Not all data debt is created equal. Growing organizations experiencing data sprawl at an expanding rate must conduct a thorough impact assessment to determine which aspects of their data debt are most harmful to operational efficiency and strategic initiatives. An effective approach involves quantifying the potential risks associated with each type of debt – such as compliance violations or lost customer insights – and calculating the opportunity cost of maintaining versus mitigating them. ... A core approach to managing data debt is to establish strong data governance practices that address inconsistencies and fragmentation. Before anything else, you must establish an adequate access control system and ensure it cannot be circumvented. Next, implement robust validation mechanisms that prevent further debt accumulation. Data governance frameworks provide a foundation for minimizing ad hoc fixes, which are the primary drivers of data debt. ... An architectural shift that facilitates scalability can help avoid the bottlenecks that arise when data outgrows its infrastructure. Technologies like cloud platforms offer scalability without heavy up-front investments, allowing organizations to expand their capacity in line with their growth.
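The "robust validation mechanisms" mentioned above can be sketched as a simple ingest gate that quarantines malformed records instead of letting them accumulate as debt. This is a toy illustration, not a production framework; the field names (`customer_id`, `email`, `signup_date`) and rules are invented for the example:

```python
# Minimal validation gate: reject or quarantine malformed records at the
# boundary so inconsistencies never enter the dataset as debt.
REQUIRED_FIELDS = {"customer_id": str, "email": str, "signup_date": str}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is clean."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"wrong type for {field}")
    if isinstance(record.get("email"), str) and "@" not in record["email"]:
        problems.append("email is not well-formed")
    return problems

def ingest(records: list[dict]):
    """Split incoming records into accepted rows and quarantined (row, issues) pairs."""
    accepted, quarantined = [], []
    for r in records:
        issues = validate_record(r)
        if issues:
            quarantined.append((r, issues))
        else:
            accepted.append(r)
    return accepted, quarantined
```

A real pipeline would lean on a schema or data-quality library rather than hand-rolled checks, but the principle is the same: validate at the point of entry so debt is caught before it spreads.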


Secure by design vs by default – which software development concept is better?

The challenge here is that, while from a security perspective we may agree that it is wise, it could inevitably put developers and vendors at a competitive disadvantage. Those who don’t prioritize secure-by-design can get features, functionality, and products out to market faster, leading to potentially more market share, revenue, customer attraction/retention, and more. Additionally, many vendors are venture-capital backed, which comes with expectations of return on investment — and the reality that cyber is just one of many risks their business is facing. They must maintain market share, hit revenue targets, deliver customer satisfaction, raise brand awareness/exposure, and achieve the most advantageous business outcomes. ... Secure-by-default development focuses on ensuring that software components arrive at the end-user with all security features and functions fully implemented, with the goal of providing maximum security right out of the box. Most cyber professionals have experienced having to apply CIS Benchmarks, DISA STIGs, vendor guidance and so on to harden a new product or software to ensure we reduce its attack surface. Secure-by-default flips that paradigm on its head so that products arrive hardened and require customers to roll back or loosen the hardened configurations to tailor them to their needs.
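The secure-by-default idea, ship hardened and let customers explicitly loosen, can be illustrated with a small sketch. Everything here (the configuration fields and their values) is invented for illustration:

```python
from dataclasses import dataclass, replace

# Illustrative only: a configuration whose defaults ARE the hardened settings,
# so the product is secure out of the box, and any relaxation is an explicit,
# visible decision by the customer rather than the starting state.
@dataclass(frozen=True)
class ServiceConfig:
    tls_required: bool = True            # encrypted transport on by default
    default_accounts_disabled: bool = True
    min_password_length: int = 14
    audit_logging: bool = True

def loosen(config: ServiceConfig, **overrides) -> ServiceConfig:
    """Return a new config recording explicit deviations from the hardened baseline."""
    for name, value in overrides.items():
        print(f"deviation from hardened baseline: {name} -> {value}")
    return replace(config, **overrides)
```

This inverts the usual hardening workflow the excerpt describes: instead of applying CIS Benchmarks or STIGs to tighten a loose product, the customer starts tight and rolls back only what they need.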


The modern CISO is a cornerstone of organizational success

Historically, CISOs focused on technical responsibilities, including managing firewalls, monitoring networks, and responding to breaches. Today, they are integral to the C-suite, contributing to decisions that align security initiatives with organizational goals. This shift in responsibilities reflects the growing realization that security is not just an IT function but a critical enabler of business goals, customer trust, and competitive advantage. CISOs are increasingly embedded in the strategic planning process, ensuring that cybersecurity initiatives support overall business goals rather than operate as standalone activities. ... One of the most critical aspects of the modern CISO role is integrating security into operational processes without disrupting productivity. This involves working closely with operations teams to design workflows prioritizing efficiency and security. This aspect of their responsibility ensures that security does not become a bottleneck for business operations but enhances operational resilience, efficiency, and productivity. ... The CISO of tomorrow will redefine success by aligning cybersecurity with business objectives, fostering a culture of shared responsibility, and driving resilience in the face of emerging risks like AI-driven attacks, quantum threats, and global regulatory pressures.


Key Infrastructure Modernization Trends for Enterprises

Cloud providers and data centers need advanced cooling technologies, including rear-door heat exchange, immersion and direct-to-chip systems. Sustainable power sources such as solar and wind must supplement traditional energy resources. These infrastructure changes will support new chip generations, increased rack densities and expanding AI requirements while enabling edge computing use cases. "Liquid cooling has evolved to move from cooling the broader data center environment to getting closer and even within the infrastructure," Hewitt said. "Liquid-cooled infrastructure remains niche today in terms of use cases but will become more predominant as next generations of GPUs and CPUs increase in power consumption and heat production." ... Document existing business processes and workflows to improve visibility and identify gaps suitable for AI implementation. Organizations must organize data for AI tools that can bring in improvements, keep track of where the data resides to organize it for AI use, build internal guidelines for training and testing AI-driven workflows, and create robust controls for processes that incorporate AI agents.


Being Functionless: How to Develop a Serverless Mindset to Write Less Code!

As the adoption of FaaS increased, cloud providers added a variety of language runtimes to cater to different computational needs, skills, etc., offering something for most programmers. Language runtimes such as Java, .NET, Node.js, Python, Ruby, Go, etc., are the most popular and widely adopted. However, this also brings some challenges to organizations adopting serverless technology. More than technology challenges, these are mindset challenges for engineers. ... Sustainability is a crucial aspect of modern cloud operation. Consuming renewable energy, reducing carbon footprint, and achieving green energy targets are top priorities for cloud providers. Cloud providers invest in efficient power and cooling technologies and operate an efficient server population to achieve higher utilization. For this reason, AWS recommends using managed services for efficient cloud operation, as part of their Well-Architected Framework best practices for sustainability. ... For engineers new to serverless, equipping their minds to its needs can be challenging. Hence, you hear about the serverless mindset as a prerequisite to adopting serverless. This is because working with serverless requires a new way of thinking, developing, and operating applications in the cloud. 


Unlocking opportunities for growth with sovereign cloud

Although there is no standard definition of what constitutes a “sovereign cloud,” there is a general understanding that it must ensure sovereignty at three fundamental levels: data, operations, and infrastructure. Sovereign cloud solutions, therefore, have highly demanding requirements when it comes to digital security and the protection of sensitive data, from technical, operational, and legal perspectives. The sovereign cloud concept also opens up avenues for competition and innovation, particularly among local cloud service providers within the UK. In a recent PwC survey, 78% of UK business leaders said they have adopted cloud in most or all areas of their organisations. However, many of these cloud providers operate and function outside of the country, usually across the pond. The development of sovereign cloud offerings provides the perfect push for UK cloud service providers to increase their market share, providing local tools to power local innovation. For a large-scale, accessible, and competitive sovereign cloud ecosystem to emerge, a combination of certain factors is essential. Firstly, partnerships are crucial. Developing local sovereign cloud solutions that offer the same benefits and ease of use as large hyperscalers is a significant challenge.


The Tipping Point: India's Data Center Revolution

"Data explosion and data localization are paving the way for a data center revolution in India. The low data tariff plans, access to affordable smartphones, adoption of new technologies and growing user base of social media, e-commerce, gaming and OTT platforms are some of the key triggers for data explosion. Also, AI-led demand, which is expected to increase multi-fold in the next 3-5 years, presents significant opportunities. This, coupled with favourable regulatory policies from the Central and State governments, the draft Digital Personal Data Protection Bill, and the infrastructure status are supporting the growth prospects," said Anupama Reddy, Vice President and Co-Group Head - Corporate Ratings, ICRA. ... The high-octane data center industry comes with its own set of challenges: high operational costs alongside hurdles in scalability, cybersecurity, sustainability, and the supply of skilled workers. Power and cooling are major cost drivers, with data centers consuming 1-1.5 per cent of global electricity. Advanced cooling solutions and energy-efficient hardware can help reduce energy costs while supporting environmental goals.



Quote for the day:

"In the end, it is important to remember that we cannot become what we need to be by remaining what we are." -- Max De Pree

Daily Tech Digest - March 23, 2020

You Need to Know SQL Temporary Table


We have been warned to NOT write any business logic in databases using triggers, stored procedures, and so on. It doesn’t mean we don’t need to know database systems. Being competent in database systems could save us a lot of work. For example, managers or customers often send us an email or a short notice asking for some one-off reports. Then we need to quickly log into the database servers and generate reports with either a list of parameters or a CSV file from requesters. ... There are two types of temporary tables: local and global temporary tables. Both of them share similar behaviors, except that the global temporary tables are visible across sessions. Moreover, the two types of temporary tables have different naming rules: local temporary tables should have names that start with a hash symbol (#); while the names of global temporary tables should start with two hash symbols (##). All temporary tables are stored in System Databases -> tempdb -> Temporary Tables.
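The session-scoped behavior of local temporary tables can be demonstrated from Python using SQLite, which follows a similar model: a TEMP table is visible only to the connection (session) that created it, much like SQL Server's `#name` local temp tables. (SQL Server's `##name` global temp tables, visible across sessions, have no direct SQLite counterpart.)

```python
import os
import sqlite3
import tempfile

# Two separate connections ("sessions") to the same database file.
db_path = os.path.join(tempfile.mkdtemp(), "demo.db")
session_a = sqlite3.connect(db_path)
session_b = sqlite3.connect(db_path)

# A TEMP table lives in the creating connection's private temp database.
session_a.execute("CREATE TEMP TABLE report (id INTEGER, total REAL)")
session_a.execute("INSERT INTO report VALUES (1, 99.5)")

# The creating session sees the table...
rows = session_a.execute("SELECT * FROM report").fetchall()
print(rows)  # [(1, 99.5)]

# ...but a second session against the same database does not.
try:
    session_b.execute("SELECT * FROM report")
    other_session_error = None
except sqlite3.OperationalError as e:
    other_session_error = str(e)
print("other session:", other_session_error)  # "no such table: ..."

session_a.close()
session_b.close()
```

The temp table also disappears automatically when its session closes, which is exactly what makes temp tables convenient for the one-off reports the excerpt describes.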



Remote work tests corporate pandemic plans


IT leaders across the country are shifting gears from accommodating short-term remote work strategies for snowstorms, hurricanes and other natural disasters to how to help workers plan for and remain productive in a longer-term remote work environment. Due to the duration of the pandemic, Miami-based ChenMed, an operator of 60 senior health centers in the eastern U.S., intends to offer the small number of 2,500 users who don't have a laptop, such as front desk staff, the opportunity to take home their desktops so they can continue to answer patient calls and conduct other business. "Yes, it creates a lot more complexity in helping users set that up, but we want them to have a great experience versus trying to use an old computer at home," CIO Hernando Celada said. This strategy gives him confidence that the machines will be secure when the time comes for workers to be sent home, which will be at the first sign of community spread of the virus because ChenMed's patient population is the most vulnerable.


Private cloud reimagined as equal partner in multi-cloud world

Forrester's Gardner argues that repatriation is not a broad trend. "It's simply not true," he says. There may be some companies moving a specific application back to the private cloud for performance, regulatory or data gravity reasons, but repatriation is a relatively isolated phenomenon. The latest Gartner thinking on repatriation is in agreement with Gardner. "Contrary to market chatter that customers are abandoning the public cloud, consumption continues to grow as organizations leverage new capabilities to drive transformation. Certain workloads with low affinities to public cloud may be repatriated, largely because the migrations were not sufficiently thought through. But few organizations are wholly abandoning the public cloud at any technology layer," reads a 2019 Gartner report from analysts Brandon Medford, Sid Nag and Mike Dorosh. Warrilow says flatly, "Repatriation in net terms is not happening." He adds that there will always be a small number of workloads that go back to the private cloud as part of an organization's ongoing evaluation of the best landing spot for specific workloads.


What’s New in SQL Monitor 10?

SQL Monitor does the best job it can, out of the box, of setting up a useful core set of metrics and alerts, with sensible thresholds. However, the right alerts and the right thresholds are 100% dependent on your systems. A group or class of servers may all need the same alert types with the same thresholds, but these may well be different from those for other classes of server. Also, your group of VMWare-based servers, for example, may need different thresholds than your bare-metal servers for the same set of memory-related alerts. Configuring all this in the GUI, server-by-server, can be time consuming and it’s easy to introduce discrepancies. This alert configuration task, just like any other SQL Server management or maintenance task should be automated. With the PowerShell API, you now write PowerShell scripts to set up the alerts on a machine in a way that is exactly in accordance with your requirements. You then use that as a model to copy all the settings to other machines, or just groups of machines.


Can APIs be copyrighted?

The law is very clear about copyright. If a programmer writes down some code, the programmer owns the copyright on the work. The programmer may choose to trade that copyright for a paycheck or donate it to an open source project, but the decision is entirely the programmer’s. An API may not be standalone code, but it’s still the hard work of a person. The programmers will make many creative decisions along the way about the best or most graceful way to share their computational bounty. ... APIs are purely functional and the copyright law doesn’t protect the merely functional expressions. If you say “yes” to a flight attendant offering you coffee, you’re not plagiarizing or violating the copyright of the ancient human who coined the word “yes.” You’re just replying in the only way you can. Imagine if some clever car manufacturer copyrighted the steering wheel and the location of the pedals. The car manufacturers have plenty of ways to get creative about fins and paint colors. Do they need to make it impossible to rent or borrow a car without a lesson on how to steer it? The law recognizes that there are good reasons not to allow copyright to control functional expressions.


From Zero to Hero: CISO Edition

With new attacks forming faster than the technologies to fight them, holding CISOs to an entirely unrealistic standard doesn’t actually serve anyone. The truth is that no matter how many technologies are deployed or how good the security posture is, 100% protection from cyberattacks is simply not possible. Perhaps senior leadership and boards of directors are finally starting to acknowledge this fact, or perhaps they're starting to realize that a successful response to an attack, along with actions by other parts of the organization, contribute to the ultimate scale and scope of the event. CISOs are uniquely capable of gauging cyber-risk and how to reduce it. Experienced CISOs understand the threats their companies face and know how to deploy the optimal mix of people, processes, and technologies, weighed against threats, to provide the best possible level of protection. Organizations that understand this are leading the charge in shifting the perception of the CISO from technical manager to strategic risk leader.


Most common cyberattacks we'll see in 2020


By convincingly impersonating legitimate brands, phishing emails can trick unsuspecting users into revealing account credentials, financial information, and other sensitive data. Spear phishing messages are especially crafty, as they target executives, IT staff, and other individuals who may have administrative or high-end privileges. Defending against phishing attacks requires both technology and awareness training. Businesses should adopt email filtering tools such as Proofpoint and the filtering functionality built into Office 365, said Thor Edens, director of Information Security at data analytics firm Babel Street. Business-focused mobile phishing attacks are likely to spread in 2020, according to Jon Oltsik, senior principal analyst for market intelligence firm Enterprise Strategy Group. As such, IT executives should analyze their mobile security as part of their overall strategy. "Spam filters with sandboxing and DNS filtering are also essential security layers because they keep malicious emails from entering the network, and protect the user if they fall for the phishing attempt and end up clicking on a malicious hyperlink," said Greg Miller, owner of IT service provider CMIT Solutions of Orange County.


Las Vegas shores up SecOps with multi-factor authentication


Las Vegas initially rolled out Okta in 2018 to improve the efficiency of its IT help desk. Sherwood estimated the access management system cut down on help desk calls relating to forgotten passwords and password resets by 25%. The help desk also no longer had to manually install new applications for users because of an internal web portal connected to Okta that automatically manages authorization and permissions for self-service downloads. That freed up help desk employees for more strategic SecOps work, which now includes the multi-factor authentication rollout. Another SecOps update slated for this year will add city employees' mobile devices to the Okta identity management system, and an Okta single sign-on service for Las Vegas citizens that use the city's web portal. Residents will get one login for all services under this plan, Sherwood said. "If they get a parking citation and they're used to paying their sewer bill, it's the same login, and they can pay them both through a shopping cart."


Coronavirus challenges capacity, but core networks are holding up

Increased use of conferencing apps may affect their availability for reasons other than network capacity. For example, according to ThousandEyes, users around the globe were unable to connect to their Zoom meetings for approximately 20 minutes on Friday due to failed DNS resolution. Others, too, are monitoring data traffic, looking for warning signs of slowdowns. “Traffic towards video conferencing, streaming services and news, e-commerce websites has surged. We've seen growth in traffic from residential broadband networks, and a slowing of traffic from businesses and universities," wrote Louis Poinsignon, a network engineer with CloudFlare, in a blog about Internet traffic patterns. He noted that on March 13, when the US announced a state of emergency, CloudFlare’s US data centers served 20% more traffic than usual. Poinsignon noted that Internet Exchange Points, where Internet service providers and content providers can exchange data directly (rather than via a third party), have also seen spikes in traffic. For example, at Amsterdam (AMS-IX), London (LINX) and Frankfurt (DE-CIX), a 10-20% increase was seen around March 9.



With a large segment of the population confined to their homes having to consume bandwidth, the internet free-for-all we have enjoyed to date is all but done. Emergency legislation or an executive order needs to be enacted to limit video content streaming to 720p across all content services, such as from Netflix, Hulu, Apple TV, Disney+, YouTube, and other providers. Traffic prioritization and shaping need to be put in place for core business applications during prime hours, which includes video conferencing for business and personal use. This would effectively be the opposite of net neutrality, as an emergency measure. Internet video streaming traffic should be prioritized for essential news providers, and the government should provide incentives for them to broadcast their content (and for home-bound citizens to consume it) over-the-air (OTA) so that additional bandwidth can be freed up. Remember the antenna and devices with built-in tuners? It may be an appropriate time to shift some programming back to the airwaves, and even bring back the DVR, so that programming can be transferred to devices during off-hours when networks aren't saturated.



Quote for the day:


"Individual commitment to a group effort - that is what makes a team work, a company work, a society work, a civilization work." -- Vince Lombardi


August 17, 2016

How to develop a cloud-first architecture and strategy

The first step is to build skills and assess applications. To create your cloud team and assess application readiness, your organization must transform. IT is becoming a broker for cloud services, and the role of cloud architect is a big part of that. Gartner used to ask if an organization could take the risk of moving to the cloud, but the question is no longer about "if," Cancila said. The question now is where you are moving and how you are going to get there. The next step in the process is to select cloud providers and services. Consider the different layers of the cloud (SaaS, PaaS, and IaaS) and how they fit into your organization's goals. Also, assess your app architecture and infrastructure.


Why Private Clouds Will Suffer A Long Slow Death

While private cloud proponents have spent the last five years focusing on getting their IaaS offerings working, the big three cloud providers have moved way beyond core computing services. They’re delivering the services IT groups will need in the future to keep their companies from being eaten by software. Google, although its revenue is still small in comparison to AWS and Azure, offers an incredibly interesting set of machine learning services. I’ve worked with them, and they offer tremendous power at an affordable price, delivered in an easy-to-use framework. It’s clear we’re at the beginning of an AI-powered revolution, and Google is staking its claim to be the pioneer in the field, as demonstrated by its DeepMind offering defeating the world champion Go player.


Intel’s New Mission: Find Fresh Uses for Its Famous Paranoia

Silicon Valley treats Moore’s Law as if it is immutable, and with even more reverence than it does paranoia. But it was not a scientific law; it was always an observation about the behavior of a market for computers and software, which paid off at a rate to justify increasing investment in making chips. It is changing, Mr. Krzanich said, because phones, sensors and cloud systems develop at different rates. “It’s lengthened to 24 to 36 months,” he said. “The performance of the ecosystem is much more than Moore’s Law.” That is why Intel is in the wireless and networking fields, and is working on a new kind of three-dimensional memory chip, which Mr. Krzanich said would be out at the end of this year, that can speed performance of big-data-type calculations sevenfold.


Ransomware-as-a-service allows wannabe hackers to cash-in on cyber extortion

The availability of Cerber to anyone who wants to pay for it differentiates it from another of the most successful ransomware families, Locky. "Locky is only being sent by one threat actor -- they use it on their own and don't share or sell it. Cerber acts as ransomware-as-a-service -- those who created it are now leasing it for anyone to use," says Horowitz. That arguably makes Cerber more dangerous than Locky because each affiliate user can infect victims using a variety of different attack methods, although the two most common involve the victim unknowingly executing a malicious program disguised as a legitimate file, delivered in a phishing email, or the victim is infected browsing a compromised website. Researchers believe there are currently over 150 active Cerber campaigns targeting users in 201 countries, with victims in South Korea, the US, and Taiwan accounting for over half of ransom payments.


Visa Alert and Update on the Oracle Breach

“Oracle’s silence has been deafening,” said Michael Blake, chief executive officer at HTNG, a trade association for hotels and technology. “They are still grappling and trying to answer questions on the extent of the breach. Oracle has been invited to the last three [industry] calls this week and they are still going about trying to reach each customer individually and in the process of doing so they have done nothing but given the lame advice of changing passwords.” The hospitality industry has been particularly hard hit by point-of-sale compromises over the past two years. Last month, KrebsOnSecurity broke the news of a breach at Kimpton Hotels. Kimpton joins a long list of hotel brands that have acknowledged card breaches over the last year, including Trump Hotels, Hilton, Mandarin Oriental, and White Lodging, Starwood Hotels and Hyatt.


Forget two-factor authentication, here comes context-aware authentication

Contextual access is, at its essence, an evolution of adaptive authentication that replaces the use of static rules and blacklists with machine learning to assess risk based on user behavior and context. Indeed, many providers already do super simplistic “context,” such as blacklisted locations. These approaches, however, are far too coarse to be effective at balancing security with usability. At the same time, 2FA adoption is hard -- users have to install an app or use insecure SMS. In fact, the U.S. government announced that it is set to phase out text-based 2FA. But contextual authentication can sit in the background and simply do its thing pretty much invisibly (unless higher risk is determined).
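A rough sketch shows the shape of risk assessment from behavior and context. The signals, weights, and threshold below are all invented for illustration; real products use learned models rather than hand-set weights:

```python
# Toy contextual authentication: score each login by how far it deviates
# from the user's own history, and escalate to a second factor only when
# risk is high, instead of applying a static location blacklist.
def risk_score(history: dict, attempt: dict) -> float:
    """Return a risk value in [0, 1] for a login attempt."""
    score = 0.0
    if attempt["country"] not in history["usual_countries"]:
        score += 0.5  # unfamiliar location
    if attempt["device_id"] not in history["known_devices"]:
        score += 0.3  # new device
    start, end = history["active_hours"]
    if not (start <= attempt["hour"] <= end):
        score += 0.2  # unusual time of day
    return min(score, 1.0)

def decide(history: dict, attempt: dict, threshold: float = 0.5) -> str:
    """Allow silently on low risk; require a second factor otherwise."""
    return "allow" if risk_score(history, attempt) < threshold else "step_up_2fa"
```

The point of the design is the invisible default: a familiar login produces no friction at all, and the step-up challenge only appears when the context looks anomalous.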


Whaling Goes After the Big Phish

Successful whaling attempts are so believable and seemingly trustworthy that executives who should probably know better are clicking on links and attachments that appear to be from fellow executives, employees or business partners. One stellar example of this includes a senior executive with a security firm who received an email that appeared to be from an underling but was actually from a whaler. He was tricked into giving up employee W-2 data. Another incident involved an executive from a major soft drink company that was in talks to choose a bottler in a highly profitable, under-serviced country. Before negotiations were completed, someone working under the executive was spear phished, and the whaler was able to harvest all email related to the negotiations, jeopardizing the talks and putting the company at a distinct disadvantage.


Serverless computing: The smart person's guide

Unlike a cloud application where code is structured in a more monolithic fashion and may handle several tasks, code running on serverless services like Lambda is more typical of that found in a microservices software architecture. Under this model, applications are broken down into their core functions, which are written to be run independently and communicate via API. These small functions run by serverless services are triggered by what are called events. Taking Lambda as an example, an event could be a user uploading a file to S3 or a video being placed into an AWS Kinesis stream. The Lambda function runs every time one of these relevant events is fired. Once the function has run, the cloud service will spin down the underlying infrastructure.
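A minimal event-triggered function in the style of an AWS Lambda Python handler makes the model concrete. The `lambda_handler(event, context)` signature is Lambda's standard one; the bucket and key names are illustrative, and the sample event follows the general shape of an S3 notification:

```python
# One small, stateless function that reacts to an upload event; the platform
# spins infrastructure up for each invocation and down again afterwards.
def lambda_handler(event, context):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real code would fetch and process the object here (e.g. via boto3).
        results.append(f"processed s3://{bucket}/{key}")
    return {"statusCode": 200, "body": results}

# Local invocation with a minimal S3-style event:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "uploads"}, "object": {"key": "photo.jpg"}}}
    ]
}
print(lambda_handler(sample_event, context=None))
```

Note how the function holds no state between invocations and does exactly one job, which is what lets the service scale it out per event and tear it down when idle.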


NSA Hacked? Top Cyber Weapons Allegedly Go Up For Auction

Although the exploits were poorly coded, “nonetheless, this appears to be legitimate code,” added Matt Suiche, CEO of cyber security startup Comae Technologies. Virginia-based Risk Based Security has also looked at the sample files and said that one of the exploits contains an IP address registered by the U.S. Department of Defense. None of this means that the NSA has been hacked. The Shadow Brokers may have simply come across a compromised system that was hosting the exploits, Risk Based Security said in a blog post. It's also possible the Shadow Brokers are promoting a big scam. Deception-based schemes are very common in hacking, Risk Based Security added. The NSA hasn't acknowledged any ties with Equation Group, and on Monday it didn't respond to a request for comment.


Don't Ditch SMS, But Change the Way You Use It

Ditching text messaging and shifting to a new form of authentication would likely confuse customers, security experts say. Instead, financial institutions should take a more nuanced approach, said Rich Rezek, vice president of market development for authentication solutions for the tech vendor Early Warning. SMS-based authentication "will still remain a tool in the tool kit" since it's inexpensive and simple for banks to set up, and something consumers are familiar with, Rezek said. But banks still need to take steps to improve how they handle two-factor authentication and SMS. "As fraudsters start to figure out [an authentication method], then you have to evolve and take the next approach," Rezek said. Common ways for a criminal to compromise an SMS authenticator include remotely hacking a phone and having the texts forwarded to a different phone, or to a computer via voice over internet protocol, Rezek said.
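One commonly suggested step up from SMS codes is an app-based time-based one-time password (TOTP, RFC 6238), which never transits the phone network at all. The algorithm is simple enough to sketch with only the standard library; parameters below follow the RFC defaults (HMAC-SHA-1, 30-second step):

```python
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Generate an RFC 6238 time-based one-time password."""
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at t=59 seconds.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

Because both sides derive the code from a shared secret and the current time, there is no message to intercept or forward, which removes the SMS-specific attack paths the excerpt lists.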



Quote for the day:


“Things work out best for those who make the best of how things work out.” -- John Wooden


October 21, 2014

Good Strategy/ Bad Strategy (Richard Rumelt, 2011)
It is because crafting a good strategy takes a lot of discipline. Most managers mistakenly take strategy work as an exercise in goal setting rather than problem solving. A bad strategy is often characterized by being full of fluff, as it fails to face the challenge, mistakes goals for strategy, and comprises bad strategic objectives (mostly misguided or impractical). Talking about the prevalence of bad strategies, the author quips: "if you fail to identify and analyze the obstacles, you don't have a strategy. Instead, you have either a stretch goal, or budget, or a list of things you wish would happen."


Technology and Inequality
Brynjolfsson lists several ways that technological changes can contribute to inequality: robots and automation, for example, are eliminating some routine jobs while requiring new skills in others (see “How Technology is Destroying Jobs”). But the biggest factor, he says, is that the technology-driven economy greatly favors a small group of successful individuals by amplifying their talent and luck, and dramatically increasing their rewards. Brynjolfsson argues that these people are benefiting from a winner-take-all effect originally described by Sherwin Rosen in a 1981 paper called “The Economics of Superstars.”


Building Culture Is Always Better Than Trying to Transform It
A strengths-based approach to organizational culture is, in part, a matter of perspective. Instead of seeing the cultural glass as half empty, we see it as half full. Instead of carping on about everything that’s wrong with the organizational culture, we focus on everything that’s right. We should work with culture, instead of against it. ... But where traditional culture change often focuses on stopping old practices and starting new ones, a strengths-based approach to managing culture would instead concentrate its efforts on figuring out how to better use — amplify, optimize, intensify — the culture’s most helpful existing attributes


Doctor Who and the Dalek: 10-year-old tests BBC programming game
He’s a VB programmer (be gentle, he’s only 10), which is part of the problem schools face in teaching coding; they are supposed to be teaching coding before the idea of a variable has appeared in maths. To get past this, the Doctor Who creative team have used a similar look and feel to Scratch, already in widespread use in schools to introduce coding. Although as an IT pro you take pride in mastering cryptic error messages, like “NULL pointer is not NULL at line -1” (yes, I’ve had that one), such messages can put off the average eight-year-old. The “Make it Digital” agenda is that every child should code, not just the smart ones, so as in Scratch, it is actually impossible to have a syntax error.


Devops has moved out of the cloud
Continuous everything is a part of the devops process, where devops is the fusing of software development (dev) with IT operations (ops). The core notion is to release high-quality code and binaries that perform well, and to do so much more rapidly than traditional approaches to development, testing, and deployment would allow. Many people attribute the rise of devops directly to the growth of cloud computing. The connection: It’s easy to continuously update cloud applications and infrastructure.


Health IT Interoperability Up To Market, Say Feds
One of their biggest recommendations is the immediate need within the health industry for standard, public application programming interfaces that allow disparate health systems to speak with one another. Such APIs are critical to enabling the interoperability required for electronic health information exchanges. "We believe that a standards-based API, combined with appropriate incentives to encourage vendors to implement the API and providers to enable access to their data via the API has potential to move interoperability forward dramatically," McCallie said in emailed comments.


The Benefits of an Application Policy Language in Cisco ACI: Part 4
Though the DevOps approach of today—with its notable improvements to culture, process, and tools—certainly delivers many efficiencies, automation and orchestration of hardware infrastructure has still been limited by traditional data center devices, such as servers, network switches, and storage devices. By adding a virtualization layer across server, network, and storage, IT was able to subdivide some of these infrastructure devices and enable somewhat more fluid compute resourcing, but this still comes with manual steps or custom scripting to prepare the end-to-end application infrastructure, and its networking needs, in a DevOps approach.


Why Apple Pay Is the Perfect Example of the Hummingbird Effect
Apple Pay will work at retail stores, but it could also become the de facto standard for online purchases that add an extra security step—namely, proving your identity using the Touch ID fingerprint reader. I'm impressed with how fluidly it works, even at launch. There's a good lesson here for small businesses, beyond the fact that it's important to follow these tech trends and start preparing for the inevitable. In his book How We Got To Now, author Steven Johnson explains how breakthroughs in science and technology often lead to what he calls the "hummingbird effect"--essentially, a way to "piggyback" ideas on top of one another that helps catapult them into mainstream consciousness.


Best Practices for Moving Workloads to the Cloud
The adoption of cloud architecture is a process that requires a strong effort across the entire enterprise. Every function, application, and piece of data has to be moved to the cloud; for this reason, it is necessary to have a strong commitment from management. Top management is responsible for the harmonious growth of the company, and technology represents a key factor for business development today. Managers have to establish reasonable goals for adopting the cloud computing paradigm. A migration to the cloud requires a team effort to plan, design, and execute all the activities to move the workloads to the new IT infrastructure.


Crafting a secure data backup strategy on a private cloud
Backing up data is not something to be taken lightly; the repercussions of data loss can include significant financial loss. Frequently, companies are unaware that they don't have a backup strategy in place, or that their backup product is not working properly. More often than not, this is because companies aren't devoting the necessary resources to creating a proper backup strategy. Even when they do, they expect the backup product to work indefinitely. Unfortunately, most things have an expiration date, and a backup strategy is no different.
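The point about backup products silently failing suggests a simple discipline: verify every copy rather than assume it worked. The sketch below (a minimal illustration with invented file names, not any particular product's mechanism) backs up a file and confirms the copy matches byte-for-byte via a checksum:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum a file so a backup copy can be verified byte-for-byte."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_and_verify(source: Path, backup_dir: Path) -> bool:
    """Copy `source` into `backup_dir` and confirm the copy is identical."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / source.name
    shutil.copy2(source, dest)
    return sha256(source) == sha256(dest)

# Demo against a throwaway file.
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "payroll.db"  # hypothetical file to protect
    src.write_bytes(b"critical records")
    ok = backup_and_verify(src, Path(tmp) / "backups")
    print("backup verified:", ok)
```

A real strategy would also test restores on a schedule, since a checksum only proves the copy landed, not that it is restorable.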



Quote for the day:

"Leadership, on the other hand, is about creating change you believe in." -- Seth Godin

September 25, 2014

In Evolving Healthcare Business Model, Tech Plays Vital Role
This scenario mirrors what happened in the banking industry in the 1990s, when independent banks sold out to "super-regional" firms in large part to be able to afford the move to a common IT platform. Many of the health system transactions Hagood has seen in the last two years have included specific IT commitments – namely, migrating to a common system (typically Meditech, Cerner or Epic) and subsequently taking advantage of group licensing.


Digital Business Technologies Dominate Gartner 2014 Emerging Technologies Hype Cycle
As you leave for work in the morning, your house automatically turns down the heat and places an order for milk (connected home) and your virtual personal assistant (VPA) alerts you that Cindy will be late to your 9 a.m. meeting and besides, the forecast you prepared has already changed (big data). You allow your car to navigate the traffic to your office (smart machines and Internet of Things [IoT]) while you manage the latest crisis. In this scenario, much of the possibility stems from the growth of digital business and continued adoption of the related technologies as they move through the Gartner 2014 Emerging Technologies Hype Cycle.


Everything you need to know about the Shellshock Bash bug
The risk centres around the ability to arbitrarily define environment variables within a Bash shell which specify a function definition. The trouble begins when Bash continues to process shell commands after the function definition, resulting in what we’d classify as a “code injection attack”. Let’s look at Robert’s example again and we’ll just take this line:

http-header = Cookie:() { :; }; ping -c 3 209.126.230.74

The function definition is () { :; }; and the shell command is the ping statement and subsequent parameters. When this is processed within the context of a Bash shell, the arbitrary command is executed.
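The widely circulated one-line probe for this bug exports an environment variable shaped like a function definition with a trailing command. Below is a sketch wrapping that probe in Python (assuming a bash binary is on the standard path); a vulnerable bash executes the trailing command while importing the function, a patched one ignores it:

```python
import subprocess

# The classic Shellshock probe: an environment variable whose value looks
# like a function definition followed by an extra command. On a vulnerable
# bash, the extra command runs as soon as the shell starts.
probe_env = {
    "x": "() { :;}; echo vulnerable",
    "PATH": "/usr/bin:/bin",
}

result = subprocess.run(
    ["bash", "-c", "echo test"],
    env=probe_env,
    capture_output=True,
    text=True,
)

# A patched bash prints only "test" (and warns on stderr that it is
# ignoring the function definition attempt); a vulnerable bash prints
# "vulnerable" before "test".
print(result.stdout, end="")
```

Here the injected payload is a harmless echo rather than the ping in Robert's example, but the mechanism is the same.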


Understanding Partitioned Indexes in Oracle 11g
By breaking an index into multiple physical pieces, you access much smaller pieces (faster), and you may separate the pieces onto different disk drives (reducing I/O contention). Both b-tree and bitmap indexes can be partitioned; hash indexes cannot be partitioned. Partitioning can work several different ways: the table can be partitioned while its indexes are not; the table is not partitioned but the index is; or both the table and the index are partitioned. Either way, the cost-based optimizer must be used. Partitioning adds many possibilities to help improve performance and increase maintainability.
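The combinations above map onto Oracle's LOCAL and GLOBAL index clauses. As a sketch (the sales table, columns, and tablespace names are invented for illustration, and the DDL is held in Python strings since there is no database to run against here):

```python
# Hypothetical range-partitioned table; each partition can sit on its own
# tablespace, which is what spreads I/O across disk drives.
create_table = """
CREATE TABLE sales (
    sale_id   NUMBER,
    sale_date DATE,
    amount    NUMBER
)
PARTITION BY RANGE (sale_date) (
    PARTITION p2013 VALUES LESS THAN (DATE '2014-01-01') TABLESPACE ts_2013,
    PARTITION p2014 VALUES LESS THAN (DATE '2015-01-01') TABLESPACE ts_2014
)
"""

# LOCAL: the index is split into one piece per table partition,
# mirroring the table's partitioning scheme.
create_local_index = """
CREATE INDEX sales_date_ix ON sales (sale_date) LOCAL
"""

# GLOBAL: the index is partitioned independently of the table --
# this clause also covers indexing a non-partitioned table.
create_global_index = """
CREATE INDEX sales_id_ix ON sales (sale_id)
GLOBAL PARTITION BY RANGE (sale_id) (
    PARTITION ix_low  VALUES LESS THAN (500000),
    PARTITION ix_high VALUES LESS THAN (MAXVALUE)
)
"""
```

A local index is generally easier to maintain, since dropping or splitting a table partition touches only the matching index piece.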


Backing Up SaaS: The Challenge
However, replication or backup doesn't protect against hack attacks, which suggests that data should be moved off to a cloud-based archiving service such as AWS Glacier. Even there, there is still a risk of a really smart hacker going after the final tape copy. It will take a bit longer, but we've seen recent situations, such as Target's debacle, where the hackers had weeks of access. Someone needs to write-protect that tape, either by offlining it or by clicking the write-protect button.


Rackspace Re-architects its OpenStack Private Clouds
With the new release Rackspace also decided to get away from Open vSwitch, the open source software defined network platform used as a plug-in for Neutron (the network element of OpenStack), conceding it was not quite ready for production and high-volume workloads. The team switched to traditional hardware load balancers and firewalls, still leveraging Neutron SDN capabilities within customer clouds. These and other changes have enabled Rackspace to offer “four nines” API Service Level Agreements for each of the core OpenStack services.


Don’t Overlook the Operationalization of Big Data, Pivotal Says
“We’re at the cusp of a tectonic shift in how organizations manage data,” Mongo vice president of business development Matt Asay said at the time. “It’s such a big opportunity it’s frankly far too big for any one company.” The folks at Pivotal might not totally agree with that assessment. While Cloudera and Mongo are working on connectors and joint solutions, the EMC spinoff–which owns its own in-memory, NoSQL database called GemFire–is looking to provide an all-in-one, soup-to-nuts big data solution.


Data Science That Makes a Difference
In a world filled with dangerous individuals who fund terrorist activities and imperil lives, data has helped the world's banks learn more about their customers and share watch lists to flag signs of trouble. This is a tremendous initiative to help fight money-laundering all around the world. As a result of data analytics and technology, financial institutions can be more confident that they are doing business with people and businesses they know, and they can vet customers regardless of where they are. Such collaborations across geographies are helping solve major problems in global business.


Android smartwatches to retail at average of $US30 by 2015: Gartner
The worldwide smartwatch market is poised for lift-off and could gobble up 40 per cent of the consumer wristwatch market by 2016. That's according to Gartner analysts, who have also predicted Android-based smartwatches could retail at an average price of $30 by 2015 as OEMs capture the consumer mass market in China and internationally. Gartner analysts say that nine out of the top 10 smartphone vendors have entered the wearables market to date or are about to ship a first product, while a year ago only two vendors were in that space.


The Pursuit of Excellence is a Choice
Many organizations struggle with a number of very common issues. They lack cogent direction. Strategies are incomplete or missing in action or in some state of flux. Employees are unengaged and unaware of how their efforts and functional or vocational goals plug into the bigger picture. Priorities are fuzzy and ever-shifting. Customers aren’t particularly loyal or happy. There’s cross-border conflict between functions where there should be cooperation and collaboration. Metrics are fuzzy and mostly rear-view mirror looking. And finally, there’s an incredible amount of waste and inefficiency due to poor and undocumented processes.



Quote for the day:

"At the heart of great leadership is a curious mind, heart, and spirit." -- Chip Conley