Showing posts with label NaaS.

Daily Tech Digest - August 15, 2024

Better Cloud Security Means Getting Back to Basics

Securing the cloud isn’t rocket science – it just requires a little extra knowledge. While it’s tempting to think of the cloud as a new frontier in computing (and, in some ways, it is), cloud security solutions have been around for almost as long as the cloud itself. The trouble is that most organizations don’t know how they should think about cloud security in the first place. ... A good starting point for many organizations is simply evaluating how effective their existing cloud security is. It isn’t enough to implement security solutions – even if they’re the right solutions. It’s also important to know that they are functioning as intended. Today’s organizations have more testing and validation tools at their fingertips than ever, and conducting breach and attack simulation, automated red teaming, and other exercises can lay bare where vulnerabilities and inefficiencies exist. Recent testing reveals that the basic security suites offered by the leading cloud providers are not enough to detect all – or even most – attack activity, highlighting the areas where organizations need to implement new protections and providing insight into what additional solutions may be necessary.


Cloud Waste Management: How to Optimize Your Cloud Resources

To better understand cloud waste, we need to understand the iron triangle of project management, which states that there is always a tradeoff between speed, quality, and cost. If you want to deliver a quality product or feature quickly, it will cost you more. Businesses are always trying to innovate and deliver continuous value to their customers, which often means pressuring delivery teams to improve time to market. The effect is over-provisioned capacity: resources spun up to validate a theory or concept are never deleted as teams move on, either to deliver the accepted solution or to another project assignment. This is one of the major causes of cloud waste. ... Since you pay for each resource provisioned in the cloud, managing cloud waste becomes critical, as it directly impacts your business’s bottom line. CFOs and finance teams struggle to forecast and budget for cloud spend because they never know what capacity is being wasted in the cloud, and there is no good way to review it regularly.
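The abandoned proof-of-concept resources described above are usually easy to spot once an inventory exists: anything with no recent activity past some cutoff is a candidate for review. A minimal sketch of that idea (the resource records, IDs, and 30-day threshold here are all hypothetical, not from any particular cloud API):

```python
from datetime import datetime, timedelta

def flag_idle_resources(resources, max_idle_days=30):
    """Return resources whose last recorded activity is older than the cutoff."""
    cutoff = datetime.utcnow() - timedelta(days=max_idle_days)
    return [r for r in resources if r["last_used"] < cutoff]

# Hypothetical inventory: one abandoned PoC VM, one active workload.
inventory = [
    {"id": "vm-poc-01", "last_used": datetime(2024, 1, 5)},
    {"id": "vm-prod-api", "last_used": datetime.utcnow()},
]
print([r["id"] for r in flag_idle_resources(inventory)])  # the PoC VM is flagged
```

In practice the `last_used` signal would come from billing or utilization metrics, and a regular review of the flagged list is exactly the process the article says finance teams currently lack.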


Campus NaaS: Transforming Enterprise Networking

The flexibility of the NaaS model allows businesses to experiment with new technologies and use cases without the risk of large, upfront investments in hardware and expertise. This is particularly valuable as emerging technologies like AI and edge computing become more prevalent in enterprise environments. ... The potential benefits of Campus NaaS are significant, but organizations must carefully evaluate potential NaaS providers. Standards-based solutions ensure interoperability between different NaaS components and service providers, allowing businesses to seamlessly integrate NaaS solutions from various vendors without compatibility issues. Security capabilities and long-term roadmaps should also be considered. Campus NaaS is poised to play a pivotal role in shaping the future of enterprise networking, enabling businesses to build the agile, high-performance foundations needed to thrive in an increasingly digital world. As the technology continues to evolve and mature, we can expect to see even more innovative use cases and deployment models emerge, further cementing the role of Campus NaaS as a cornerstone of modern enterprise IT strategy.


Applying Security Everywhere – How to Prioritise Risks Across Multiple Platforms

For IT architects and security teams, the joint challenge here is actually one of the oldest ones in IT – knowing what you have. Getting an accurate inventory of all your software assets and components is a hard task on one platform, let alone across internal datacenter deployments, web applications, public cloud implementations and modern cloud-native applications. Keeping this inventory up to date is harder still, given how much change will take place over time across the entire application estate. Alongside this inventory, there are other factors to consider. Not all applications are created equal, and an issue in an internal web application that is used by a few people every month will not be as important as a critical vulnerability in a business application that is responsible for generating revenue every day. Yet both of these applications may have a flaw, and alerts requesting fixes or updates will be raised for each. Internal processes and workflows will also affect the situation. While security teams might spot potential issues in an application or software component like an API, they will not be responsible for making the change themselves.


Attempting Digital Transformation? Try Embracing Team Resistance

Resistance to transformation has several causes, Dewal says. First off, many logistics professionals already feel slammed, and don’t welcome the idea of new work. “It can feel like an add-on, creating competing priorities,” she says. Then there’s a fear-based resistance to the perceived complexity of the new tasks involved. “It’s too complex and we don’t have the right skill sets to be able to execute on them,” she says, describing this mindset. “Collectively, let’s call it the fear of failure, of getting it wrong.” Finally, there’s the familiar human tendency to prefer sticking with the status quo. “That can hide variations underneath it,” Dewal says. “Sometimes the team is not even sure why the transformation is needed. Sometimes, they feel like they’re not getting enough support in terms of executing it.” Further, the survey dug into two types of resistance – productive and unproductive. Productive resistance is the type that comes from on-the-ground knowledge and expertise that relates to the implementation itself. ... Leaders who avoided a top-down, change-or-die approach, and instead focused on communication and collaboration, had a much better chance of success, the survey found.


How leading CISOs build business-critical cyber cultures

In information security, where risk is widespread, attacks are becoming increasingly sophisticated, and so much is on the line, one defining attribute of successful CISOs is their courage. The good news is, courage is a muscle that can be developed just like any other. It’s also a mindset. The CISOs on this panel described various internal motivators that keep them in the game, resilient, and adaptable, even in the face of daunting challenges. They made it clear that it’s a lot easier to be courageous when you’re driven by a love for what you do and maintain a clear line of sight to the impact you’re making. One of the common threads is their focus on “moments of truth,” those points of contact between cybersecurity and various stakeholders. Leaders who are intentional about this find they’re better able to see around corners and show up more strategically as business enablers. Rodgers says it’s a lesson she learned in the early days of her career when she worked on a help desk. Fielding complaints all day takes its own kind of courage. “But the beauty of it is, you get to know people and how they work,” she says. “I got to a point where I could anticipate what they were going to want, so I started proactively providing those things. ...”


How passkeys eliminate password management headaches

There are several usability challenges that could affect the adoption of passkeys. Key among them is compatibility, as passkeys may not work on outdated operating systems or older devices. Beyond the technical roadblocks, user resistance is often the reason for a failure to adopt new technology such as passkeys. After all, users have been leveraging passwords since the early 1960s. Emphasizing training and education on how to provision passkeys is essential to adoption, as registration could be challenging for non-tech-savvy users. It may be best to start with small groups or departments to address unique challenges within the organization’s diverse culture and educate users. Organizations are starting to adopt passkeys to enhance security and optimize productivity, and as with any new implementation, there will be challenges. Passkey implementation should begin with top-level leadership as early adopters, which will help employees buy in and ensure a smooth transition from traditional passwords to passkeys. Upfront investment in planning, and creating robust policies and processes, will be critical to the implementation’s success.


Six Common Digital Transformation Challenges

Aligned leadership helps in allocating resources efficiently, prioritizing initiatives that drive the most value, and mitigating risks associated with digital transformation efforts. Clear, consistent communication from aligned leaders also builds trust and motivates teams to adapt to new paradigms. Ultimately, leadership alignment serves as the backbone of successful digital transformation by driving coherent strategies and fostering an environment conducive to innovation and agility. Effective communication is paramount, with transparent discussions about goals, challenges, and expected outcomes. Additionally, establishing cross-functional teams can help integrate diverse perspectives, facilitating smoother transitions during technology adoption. By embedding these practices into the organizational fabric, leaders can drive successful digital transformation while maintaining strategic coherence. Addressing resistance to change and fostering a digital mindset among leaders is pivotal in navigating this digital transformation challenge. Resistance often stems from a fear of the unknown and a reluctance to abandon established processes. 


Why Can’t Automation Eliminate Configuration Errors?

The emergence of configuration intelligence changes the game in several ways. First, it means that anyone tasked with maintaining configurations can save a lot of time and trouble that used to involve manual, tedious but cognitively intense tasks like reading through YAML manifests or config files to identify tiny errors. Yes, some tools existed to do this before, but they mostly functioned more like “linters,” spotting obvious syntax errors. By simplifying the process, time to manually maintain configs is drastically reduced. ... The lack of detailed expertise has been a traditional problem of IaC products, which struggle to keep up with configuration recommendations across the dozens of software applications and infrastructure components they manage and automate. The lack of detailed configuration expertise also creates a cadre of in-house experts, who become key sources of institutional memory — but also major risks. When your load-balancing guru walks out the door to take another job, then everything they know that’s not clearly documented goes out the door too.
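The distinction the article draws between linters and configuration intelligence is the difference between checking syntax and checking meaning. A toy sketch of the latter, applied to an already-parsed deployment config (the field names and rules below are loosely modeled on Kubernetes conventions but are purely illustrative):

```python
def check_config(cfg):
    """Apply semantic rules a pure syntax linter would miss."""
    findings = []
    # A syntactically valid config can still encode an availability risk.
    if cfg.get("replicas", 1) < 2:
        findings.append("replicas < 2: no redundancy if a node fails")
    # Cross-field rule: probe timeout should be shorter than its period.
    probe = cfg.get("livenessProbe", {})
    if probe and probe.get("timeoutSeconds", 1) >= probe.get("periodSeconds", 10):
        findings.append("liveness timeout >= period: probes can overlap")
    # Missing sections are invisible to a linter but matter operationally.
    if "resources" not in cfg:
        findings.append("no resource limits: workload can starve its neighbours")
    return findings

deployment = {"replicas": 1, "livenessProbe": {"timeoutSeconds": 10, "periodSeconds": 5}}
for finding in check_config(deployment):
    print("-", finding)
```

Every rule here passes a linter's syntax check; each one encodes the kind of operational expertise that otherwise lives only in the head of the in-house guru the article warns about.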


Enterprise spending on cloud services keeps accelerating

“Enterprises are also choosing to house an ever-growing proportion of their data center gear in colocation facilities, further reducing the need for on-premise data center capacity. The rise of generative AI technology and services will only exacerbate those trends over the next few years, as hyperscale operators are better positioned to run AI operations than most enterprises,” he wrote. Dinsdale told me the workloads staying on-premises tend to be workloads that are either very complex and cannot easily be transitioned, are focused on highly sensitive data, are governed or influenced by regulatory issues, or are highly predictable and can be managed economically on-premises. Enterprises worldwide are spending around $100 billion per year on their own data center IT hardware and associated infrastructure software, which has held flat for the last several years. By comparison, enterprises are now spending $80 billion per quarter on cloud services; not to mention another $65 billion per quarter on SaaS. “And those cloud and SaaS numbers are growing like gangbusters,” he said.



Quote for the day:

"The whole point of getting things done is knowing what to leave undone." -- Lady Stella Reading

Daily Tech Digest - Aug 03, 2024

Solving the tech debt problem while staying competitive and secure

Technical debt often stems from the costs of running and maintaining legacy technology services, especially older applications. It typically arises when organizations make short-term sacrifices or use quick fixes to address immediate needs without ever returning to resolve those temporary solutions. For CIOs, balancing technical debt with other strategic priorities is a constant challenge. They must decide whether to invest resources in high-profile areas like AI and security or to prioritize reducing technical debt. ... CIOs should invest in robust cybersecurity measures, including advanced threat detection, response capabilities, and employee training. Maintaining software updates and implementing multifactor authentication (MFA) and encryption will further strengthen an organization’s defenses. However, technical debt can significantly undermine these cybersecurity efforts. Legacy systems and outdated software can have vulnerabilities waiting to be exploited. Additionally, technical debt is often represented by multiple, disparate tools acquired over time, which can hinder the implementation of a cohesive security strategy and increase cybersecurity risk.


How to Create a Data-Driven Culture for Your Business

With businesses collecting more data than ever, for data analysts it can be more like scrounging through the bins than panning for gold. “Hiring data scientists is outside the reach of most organizations but that doesn't mean you can’t use the expertise of an AI agent,” Callens says. Once a business has a handle on which metrics really matter, the rest falls into place: organizations can define objectives and then optimize data sources. As the quality of the data improves, decisions are better informed and the outcomes can be monitored more effectively. Rather than each decision acting in isolation, it becomes a positive feedback loop where data and decisions are inextricably linked: At that point the organization is truly data driven. Subramanian explains that changing the culture to become more data-driven requires top-down focus. When making decisions, stakeholders should be asked to provide data justification for their choices, and managers should be asked to track and report on data metrics in their organizations. “Have you established tracking of historical data metrics and some trend analysis?” she says. “Prioritizing data in decision making will help drive a more data-driven culture.”


How Prompt Engineering Can Support Successful AI Projects

Central to the technology is the concept of foundation models, which are rapidly broadening the functionality of AI. While earlier AI platforms were trained on specific data sets to produce a focused but limited output, the new approach throws the doors wide open. In simple — and somewhat unsettling — terms, a foundation model can learn new tricks from unrelated data. “What makes these new systems foundation models is that they, as the name suggests, can be the foundation for many applications of the AI model,” says IBM. “Using self-supervised learning and transfer learning, the model can apply information it’s learnt about one situation to another.” Given the massive amounts of data fed into AI models, it isn’t surprising that they need guidance to produce usable output. ... AI models benefit from clear parameters. One of the most basic is length. OpenAI offers some advice: “The targeted output length can be specified in terms of the count of words, sentences, paragraphs, bullet points, etc. Note however that instructing the model to generate a specific number of words does not work with high precision. The model can more reliably generate outputs with a specific number of paragraphs or bullet points.”
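OpenAI's observation that models follow a bullet-point count more reliably than a word count suggests a practical pattern: specify length in countable units, then validate the reply. A small sketch (the prompt wording and sample reply are made up for illustration; no API call is shown):

```python
def build_prompt(topic, bullet_count=3):
    """Specify output length in bullet points, which models track more reliably than words."""
    return (f"Summarize {topic} as exactly {bullet_count} bullet points, "
            f"each starting with '- '.")

def count_bullets(text):
    """Count lines formatted as bullet points, for validating a model reply."""
    return sum(1 for line in text.splitlines() if line.lstrip().startswith("- "))

prompt = build_prompt("cloud cost optimization")
# A hypothetical model reply; in real use this would come from the API.
reply = "- Right-size instances\n- Delete orphaned volumes\n- Use spot capacity"
print(count_bullets(reply))  # 3: the reply matched the requested structure
```

Because the constraint is structural rather than a word count, a simple check like `count_bullets` can catch non-conforming output and trigger a retry.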


Effective Strategies To Strengthen Your API Security

To secure your organisation, you have to figure out where your APIs are, who’s using them and how they are being accessed. This information is important as API deployment increases your organisation’s attack surface, making it more vulnerable to threats. The more exposed they are, the greater the chance a sneaky attacker might find a vulnerable spot in your system. Once you’ve pinpointed your APIs and have full visibility of potential points of access, you can start to include them in your vulnerability management processes. By proactively identifying vulnerabilities, you can take immediate action against potential threats. Skipping this step is like leaving the front door wide open. APIs give businesses the power to automate the process and boost operational efficiency. But here’s the thing: with great convenience comes potential vulnerabilities that malicious actors could exploit. If your APIs are internet-facing, then it’s important to put in place rate-limiting to control requests and enforce authentication for every API interaction. This helps take the guesswork out of who gets access to what data through your APIs. Another key measure is using the cryptographic signing of requests.
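The rate-limiting the article recommends for internet-facing APIs is commonly implemented as a token bucket, which enforces a steady request rate while tolerating short bursts. A minimal single-process sketch (production deployments would typically rate-limit at a gateway or with shared state such as Redis, and the rates here are arbitrary):

```python
import time

class TokenBucket:
    """Allow at most `rate` requests/second, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = capacity, time.monotonic()

    def allow(self):
        # Refill tokens in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1  # spend one token per admitted request
            return True
        return False          # caller should return HTTP 429 here

bucket = TokenBucket(rate=5, capacity=10)  # 5 req/s sustained, burst of 10
results = [bucket.allow() for _ in range(12)]
print(results.count(True))  # the instantaneous burst is capped near the capacity
```

Per-client buckets (keyed by API key or IP) combined with authentication on every call give the "who gets access to what" control the article describes.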


The Time is Now for Network-as-a-Service (NaaS)

As the world’s networking infrastructure has evolved, there is now far more private backbone bandwidth available. Like all cloud solutions, NaaS also benefits from significant ongoing price/performance improvements in commercial hardware. Combined with the growing number of carrier-neutral colocation facilities, NaaS providers simply have many more building blocks to assemble reliable, affordable, any-to-any connectivity for practically any location. The biggest changes derive from the advanced networking and security approaches that today’s NaaS solutions employ. Modern NaaS solutions fully disaggregate control and data planes, hosting control functions in the cloud. As a result, they benefit from practically unlimited (and inexpensive) cloud computing capacity to keep costs low, even as they maintain privacy and guaranteed performance. Even more importantly, the most sophisticated NaaS providers use novel metadata-based routing techniques and maintain end-to-end encryption. These providers have no visibility into enterprise traffic; all encryption/decryption happens only under the business’ direct control.


Criticality in Data Stream Processing and a Few Effective Approaches

With the advancement of stream processing engines like Apache Flink, Spark, etc., we can aggregate and process data streams in real time, as they handle low-latency data ingestion while supporting fault tolerance and data processing at scale. Finally, we can ingest the processed data into streaming databases like Apache Druid, RisingWave, and Apache Pinot for querying and analysis. Additionally, we can integrate visualization tools like Grafana, Superset, etc., for dashboards, graphs, and more. This is the overall high-level data stream processing life cycle for deriving business value and enhancing decision-making capabilities from streams of data. Even with its strength and speed, stream processing has drawbacks of its own. A few of them, from a bird's-eye view, are ensuring data consistency, scaling, maintaining fault tolerance, and managing event ordering. Even though we have event/data stream ingestion frameworks like Kafka, processing engines like Spark, Flink, etc., and streaming databases like Druid, RisingWave, etc., we encounter a few other challenges if we drill down further.
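The core aggregation step that engines like Flink perform at scale can be illustrated in miniature with a tumbling (fixed, non-overlapping) window count. This toy sketch processes an in-memory list rather than a live stream, and sidesteps exactly the hard parts the article names (fault tolerance, out-of-order events, scaling):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms=1000):
    """Group (timestamp_ms, key) events into fixed windows and count per key."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts // window_ms * window_ms  # align to window boundary
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(100, "click"), (450, "view"), (990, "click"), (1200, "click")]
print(tumbling_window_counts(events))
# {0: {'click': 2, 'view': 1}, 1000: {'click': 1}}
```

A real engine evaluates the same windowing logic incrementally over an unbounded stream, with watermarks to handle late events – which is precisely why the event-ordering challenge above is non-trivial.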


Understanding the Impact of AI on Cloud Spending and How to Harness AI for Enhanced Cloud Efficiency

The real magic happens when AI unlocks advanced capabilities in cloud services. By crunching real-time data, AI transforms how businesses operate, making them more agile and strategic in their approaches. Businesses can gain better scalability, run operations more efficiently, and make smarter, data-driven decisions – all thanks to AI. One of the biggest advantages of AI in the cloud is how it helps companies scale up smoothly. By using AI-driven solutions, businesses can predict future demands and optimise resource allocation accordingly. This means they can handle increased workloads without massive infrastructure overhauls, which is crucial for staying nimble and competitive. Scaling AI in cloud computing isn’t without its challenges, though. It requires strategic approaches like getting leadership buy-in, establishing clear ROI metrics, and using responsible AI algorithms. These steps ensure that AI integration not only scales operations but also does so efficiently and with minimal disruption. AI algorithms continuously monitor workload patterns and can make recommendations on adjusting resource allocations accordingly.


Blockchain Technology and Modern Banking Systems

“Zumo's innovative approach to integrating digital assets into traditional banking systems leverages APIs to simplify the process.” As Nick Jones explains, its Crypto Invest solution offers a digital asset custody and exchange service that can be seamlessly incorporated into a bank's existing IT infrastructure. “This provides consumer-facing retail banks with a compliance-focused route to offer their customers the option to invest in digital assets,” says Nick. By doing so, banks can generate new revenue streams, enabling customers to buy, hold and sell crypto within the familiar confines of their own banking platform. Recognising the regulatory and operational challenges faced by banks, Nick Jones believes in developing a sustainable and long-term approach, with a focus on delivering the necessary infrastructure. For banks to confidently integrate digital asset propositions into their business models, they must address the financial, operational and environmental sustainability of the project. Similarly, Kurt Wuckert highlights the feasibility of a hybrid approach for banks, where blockchain solutions are introduced gradually alongside existing systems. 


The transformation fallacy

Describing the migration process so far, Jordaan says that they started with some of the very critical systems. “One of which was the e-commerce system that runs 50 percent of our revenue,” he says. “That was significant, and provided scalability, because we could add more countries into it, and there are events such as airlines that cancel flights and so our customers would suddenly be looking for bookings.” After that, it was a long-running program of lifting and shifting workloads depending on their priority. The remaining data centers are either “just really complicated” to decommission, or are in the process of being shut down. By the end of next year, Jordaan expects TUI to have just one or two data centers. One of the more unique areas of TUI’s business from an IT perspective is that of the cruise ships. “Cruise ships actually have a whole data center on board,” Jordaan says. “It has completely separate networks for the onboard systems, navigation systems, and everything else, because you're in the middle of the sea. You need all the compute, storage, and networks to run from a data center.” These systems are being transformed, too. Ships are deploying satellite connectivity to bring greater Internet connectivity on board. 


AI and Design Thinking: The Dynamic Duo of Product Development

When designing products that incorporate generative AI, it may feel that you are tipping in the direction of being too technology-focused. You might be tempted to forego human intuition in order to develop products that embrace AI’s innovation. Or, you may have a more difficult time discerning what is meant to be human and what is meant to be purely technical, because AI is such a new and dynamic field that changes almost weekly. The human/machine duality is precisely why combining human-centric Design Thinking with the power of Generative AI is so effective for product development. Design Thinking isn’t merely a method; it’s a mindset focusing on user needs, iterative learning, and cross-functional teamwork—all of which are essential for pioneering AI-driven products. ... One might say that focusing on a solution to a problem, instead of the problem itself, is quite an empathetic way to approach a problem. Empathy, a cornerstone of Design Thinking, allows developers to understand their users deeply. ... While AI is a powerful tool, it’s crucial to maintain ethical standards and monitor for biases. Generative AI should not be considered a replacement for human ethics and critical thinking. Instead, use it as a collaborative component for enhancing creativity and efficiency.



Quote for the day:

"The litmus test for our success as Leaders is not how many people we are leading, but how many we are transforming into leaders" -- Kayode Fayemi

Daily Tech Digest - February 28, 2023

Does the Future of Work include Network as a Service (NaaS)?

NaaS enables companies to implement a network infrastructure that will evolve with time, providing the flexibility to adapt to business needs as time evolves. With NaaS, companies can focus on business outcomes and service level objectives for their network and the accessibility required for their community of workers, partners, and customers. NaaS eliminates organizations having to worry about keeping up with the pace of technology change by relying on the strength and expertise of their implementation partner. NaaS eliminates large upfront capital expenditure investments that often go into new network infrastructure design, planning, and implementation with a monthly subscription-based or flexible consumption model, alleviating the financial impact on rebuilding a new workplace environment. NaaS enables more flexibility by not tying the organization down to specific hardware or capital investments that may eventually become obsolete.


How Technical Debt Hampers Modernization Efforts for Organizations

“When you develop an application, you take certain shortcuts for which you're going to have to pay the price back later on,” explains Olivier Gaudin, cofounder and CEO of SonarSource, which develops open-source software for continuous code quality and security. “You accept that your code is not perfect. You know that it will have a certain cost when you come back to it later. It will be a bit more difficult to read, to maintain or to change.” ... Experts note the patience and long-term strategy required to overcome technical debt. “It’s a matter of focusing on longer-term strategy over short-term financial goals,” Orlandini says. “Unfortunately for Southwest, the issues were well-known. However, the business as a whole did not have the will or motivation to invest in fixing it until it was too late. They are an extreme example but serve as a very valid case in point of what can happen if you do not understand the issues and the ultimate repercussions of not investing to avoid a meltdown, in whatever form that would take for each organization.”


Just how big is this new generative AI? Think internet-level disruption

While resources and information availability increased by an unprecedented degree, so too did misinformation, scams, and criminal activity. One of the biggest problems with ChatGPT is that it presents completely wrong information as eloquently and confidently as it presents accurate information. Unless requested, it doesn't provide sources or cite where that information came from. Because it aggregates a tremendous amount of free-form information, it's often impossible to trace how it comes by its knowledge and assertions. This makes it ripe for corruption and gaming. At some point, AI designers will need to open their systems to the broader internet. When they do, oh boy, it's going to be rough. Today, there are entire industries dedicated to manipulating Google search results. I'm often required by clients to put my articles through software applications that weigh each word and phrase against how much Google oomph it produces, and then I'm asked to change what I write to appeal more to the Google algorithms.


Is blockchain really secure? Here are four pressing cyber threats you must consider

Blockchains use consensus protocols to reach agreement among participants when adding a new block. Since there is no central authority, consensus protocol vulnerabilities threaten to control a blockchain network and dictate its consensus decisions from various attack vectors, such as the majority (51%) and selfish mining attacks. ... The second threat is related to the exposure of sensitive and private data. Blockchains are transparent by design, and participants may share data that attackers can use to infer confidential or sensitive information. As a result, organizations must carefully evaluate their blockchain usage to ensure that only permitted data is shared without exposing any private or sensitive information. ... Attackers may compromise private keys to control participants’ accounts and associated assets by using classical information technology methods, such as phishing and dictionary attacks, or by exploiting vulnerabilities in blockchain clients’ software.
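It helps to see why attackers target consensus protocols and private keys rather than the chain itself: the hash links make after-the-fact tampering detectable by any verifier. A toy hash chain demonstrating this (a deliberately simplified model, with none of a real blockchain's signatures, Merkle trees, or consensus):

```python
import hashlib

def block_hash(index, prev_hash, data):
    """Hash a block over its index, predecessor hash, and payload."""
    return hashlib.sha256(f"{index}|{prev_hash}|{data}".encode()).hexdigest()

def build_chain(payloads):
    chain, prev = [], "0" * 64  # conventional all-zero genesis predecessor
    for i, data in enumerate(payloads):
        h = block_hash(i, prev, data)
        chain.append({"index": i, "prev": prev, "data": data, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Tampering with any block breaks its own hash and every link after it."""
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or b["hash"] != block_hash(b["index"], prev, b["data"]):
            return False
        prev = b["hash"]
    return True

chain = build_chain(["alice->bob:5", "bob->carol:2"])
print(verify(chain))           # True: links are intact
chain[0]["data"] = "alice->bob:500"
print(verify(chain))           # False: the edit is detectable
```

Because rewriting history this way is evident to every honest node, practical attacks aim instead at the points the article lists – consensus majorities, exposed data, and stolen keys.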


Behaviors To Avoid When Practicing Pair Programming

Despite its popularity, pair programming seems to be a methodology that is not widely adopted by the industry. When it is, what "pair" and "programming" mean might vary with the specific context. Sometimes pair programming is used at specific moments throughout a practitioner's day to fulfill specific tasks, as reported by Lauren Peate on the podcast Software Engineering Unlocked hosted by Michaela Greiler. But in XP, pair programming is the default approach to developing all aspects of the software. Due to the variation and interpretation of what pair programming is, companies that adopt it might face some misconceptions about how to practice it. Often, this is the root cause of a poor experience while pairing. Lack of soft (social) skills ... Driver and navigator is the style that requires the pair to focus on a single problem at once. The navigator is the one who should support and question the driver's decisions to keep both in sync. When this does not happen, the collaboration session might suffer from a lack of interaction between the pair.


When it comes to network innovation, we must protect the data ‘pipes’

We must conclude that any encrypted information collected by foreign intelligence services will eventually be cracked through sufficient compute power and time. This is one reason why supercomputers are part of the race for information dominance. At the level of supercomputers, the amount of compute is truly measured in the cost to build and the cost to operate. If you do not have access to cutting-edge chips, just increase the number of compute chips – central processing units, graphics processing units, or some other compute unit like an AI accelerator. It will cost more to build and more electricity to operate, but the compute will be available to the government or corporation that invested in the system. Without a true “zero trust” scheme, any compromise of any node on any network becomes a pivot point for further attacks. The problem with “zero trust” is that to be effective, you need a mature network model that can be secured, not a “growing, organic network” that is adapting rapidly to meet the needs of the user.


Unstructured data and the storage it needs

As we’ve seen, unstructured data is more or less defined by the fact it is not created by use of a database. It may be the case that more structure is applied to unstructured data later in its life, but then it becomes something else. ... It’s quite possible to build adequately performing file and object storage on-site using spinning disk. At the capacities needed, HDD is often the most economic option. But advances in flash manufacturing have led to high-capacity solid state storage becoming available, and storage array makers have started to use it in file and object storage-capable hardware. This is QLC – quad-level cell – flash, which stores four bits per cell to provide higher storage density, and so a lower cost per GB, than any other commercially usable flash. The trade-offs that come with QLC, however, are that flash lifetime can be compromised, so it’s better suited to large-capacity, less frequently accessed data. But the speed of flash is particularly well-suited to unstructured use cases, such as analytics, where rapid processing and therefore fast I/O is needed.


The Cybersecurity Hype Cycle of ChatGPT and Synthetic Media

Historically, spearphishing messages have been partially or entirely crafted by people. However, synthetic chat makes it possible to automate this process – and highly advanced synthetic chat, like ChatGPT, makes these messages seem just as convincing as, or more convincing than, a human-written message. It also opens the door for automated, interactive malicious communications. With this in mind, threat actors can quickly and cheaply scale up high-cost, highly effective approaches like spearphishing. These capabilities could be used to support cybercrime, nation-state operations and more. Advances like ChatGPT may also have a meaningful impact on information operations, which have come to the forefront due to foreign influence in recent US presidential elections. Technologies such as ChatGPT can generate lengthy, realistic content supporting divisive narratives, which could help scale up information operations.


How to de-risk your digital ecosystem

In short, in any de-risking framework, one must assume that the largest source of cyberthreats comes not from someone breaking in, but rather from a door left open for an uninvited guest. Organizations must adapt their mindset, their processes, and their resources accordingly. ... In many organizations, the responsibility for closing risk gaps falls to several leaders, but not to a single point of authority. The failure is understandable, as digital ecosystems touch multiple dimensions of an enterprise. But then responsibility for the total risk environment and de-risking is shared — though not necessarily met. A lack of accountability results in a lack of power to act and to set de-risking as a priority within the organization. ... Without an understanding of the business context, it is difficult to assess and remediate risk effectively. For example, an outside vendor can be a potential source of risk but also play a critical and central role in the business. Resolving and mitigating the issue may require special handling and attention.


Closing the Cybersecurity Talent Gap

Cybersecurity is often viewed as just another technical talent field, yet candidates are expected to possess a wide range of rapidly evolving knowledge and skills. When filling staffing gaps, leaders should examine the skill sets that are missing from their current team, such as creative problem solving, stakeholder communications, buy-in development, and change enablement. “Look for candidates who will help balance out existing team skills as opposed to individuals who match a specific technical qualification,” Glair says. Before hiring can begin, it's necessary to attract suitable candidates. Initial search steps should include website updates and social media posts, Glair says. He also suggests creating an internal “cybersecurity academy” that will build talent from within the organization. “This should include the technical, process, communications, and leadership skills needed to address today’s cybersecurity challenges,” Glair notes. Burnet recommends sponsoring a “sourcing jam.” “That means getting recruiters and/or hiring managers in a room together ... to trawl through their networks and get them to personally reach out.”



Quote for the day:

"Leaders are the ones who keep faith with the past, keep step with the present, and keep the promise to posterity." -- Harold J. Seymour

Daily Tech Digest - June 20, 2022

Metaverse: Momentum is building, but companies are still staying cautious

"It's too early to understand whether the metaverse is going to be a big thing or whether it is just another buzzword and marketing exercise," he says. "But I suspect it's going to have enough momentum behind it that it will become a thing that we will want to be interested in." That seems to be the general consensus among other industry observers, too. While the cash invested by Big Tech means the metaverse is likely to become successful eventually, no one should be expecting to collaborate with colleagues and friends in a rich virtual space tomorrow. Distinguished Gartner analyst Mark Raskino suggests that the challenge of filling the human field of view with a realistic and immersive image space is an incredibly hard problem to solve. "I do believe that one day business will commonly be conducted in a fully immersive 3D visual metaverse. But it will not happen in the 2020s. It probably won't happen in the 2030s." In fact, such is the slow pace of development that some businesses believe there's no big requirement to rush headfirst into metaverse pilots.


Are you ready to automate continuous deployment in CI/CD?

Continuous deployment, as a principle, can be applied to many applications, even in the most regulated industries. Tim Lucas, co-founder and co-CEO of Buildkite, says, “Continuous deployment can be adopted per project, and the best orgs set goals for moving as many projects as possible to this model. Even in finance and regulated industries, the majority of projects can adopt this model. We even see self-driving car companies doing continuous deployment.” While devops teams can implement continuous deployment in many projects, the question is where it offers a strong business case and significant technical advantages. Projects that deploy features and fixes frequently, and where a modernized architecture simplifies the automations, are the most promising candidates for a transition to continuous deployment. Lucas shares some of the prerequisites that should be part of the software development process before moving to a continuous deployment model. He says, “Continuous deployment is true agility, the fastest way from code change to production. It requires always keeping the main branch in a shippable state, automating tests, and high-quality tooling you can trust and have confidence in.”
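The prerequisites Lucas lists (a shippable main branch, automated tests, trusted tooling) boil down to an automated gate between commit and production. A minimal sketch of that decision logic, with hypothetical check names standing in for a real CI pipeline:

```python
# Sketch of a continuous-deployment gate: every change that lands on the
# main branch deploys automatically, but only if all automated checks pass.
# Check names and results here are hypothetical stand-ins for a real pipeline.

def should_deploy(branch: str, checks: dict[str, bool]) -> bool:
    """Deploy only from main, and only when every automated check passes."""
    return branch == "main" and all(checks.values())

checks = {"unit_tests": True, "integration_tests": True, "lint": True}
print(should_deploy("main", checks))       # True: main is shippable, so ship
print(should_deploy("feature/x", checks))  # False: never deploy from a branch
checks["integration_tests"] = False
print(should_deploy("main", checks))       # False: main must stay shippable
```

The point of the sketch is that “keeping the main branch in a shippable state” is not a policy document but an automated, default-blocking condition on every deploy.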


Tech sector sustainability efforts need full ecosystem approach

On redefining growth, Ryan Shanks, head of sustainability for Europe at Accenture, noted that while innovation in many areas is done one company at a time and then used for competitive advantage, the opposite is true for climate change-related innovation. “What I’m seeing in our portfolio work at the moment, if it relates to the circular economy or the energy transition, etc, is none of our individual clients can actually do anything on their own,” he said. “They are hugely reliant on an ecosystem – policy folks, regulators, entrepreneurs, not-for-profits – of people coming together.” Shanks said that to achieve innovation at scale, the first thing organisations should do is adopt an inter-disciplinary approach from the ideation stage. “I mean the technologists, the consumer folks, the business model people and finally, increasingly for us, social scientists and ethicists, working side by side,” he said. “Now on a day-to-day basis, they’ll tell me that working together slows each of them down – the creatives want to work on their own, and the tech want to work on their own – but I’ll say it catches up in the long run because it speeds things up to get to scale.”


Redefining NaaS: It’s the internet

An internet NaaS will require either an SD-WAN, which has to be managed, or some added security layer (maybe SASE or a combination of encryption and firewall tools) to secure the applications themselves. Enterprises that use the internet to connect with customers and partners may find it relatively easy to add employee access via the internet, using access-security tools and encryption alone. That approach should be explored, but SD-WAN is the closest to traditional VPN technology, and that makes it possible to transition gradually from a traditional VPN to an internet NaaS via SD-WAN. You can get SD-WAN technology as a product set or as a managed service. If you really want to avoid capital purchases, the latter option is the way to go. The price of an internet SD-WAN managed service will depend on the usual factors, like the number of sites and the amount of management handholding you expect, and also on just where the sites are. There’s a lot of variation, but enterprises that have switched to an internet NaaS tell me the total cost of ownership is far, far lower than a managed IP VPN.


The future of the creator economy in a Web3 world

Creator-owned content is the first iteration of the Web3 creator economy. On current social platforms such as Instagram and TikTok, the company behind the platform owns the content that creators produce. Web3 will enable creators to not only own their content on existing social platforms, but also own a part of the platform they produce and distribute content on. Content can begin to be creator-owned and platform-agnostic through the use of NFTs, which act as proof of ownership and validate the content’s authenticity. ... Creators will also play a key role in the metaverse. In addition to participating in it, creators can develop parts of the metaverse with either no-code tools or a technical background. This has already started to take shape in existing gaming metaverses, most notably Roblox, where anyone can create video games and monetize them directly on the platform. In 2020 alone, creators earned $329 million through Roblox. “Metaverse creators” will likely grow to become an active and profitable vertical of the creator economy in the years to come.


Zero Trust, SASE and SSE: foundational concepts for your next-generation network

Zero Trust Network Access is the technology that makes it possible to implement a Zero Trust security model by requiring strict verification of every user and every device before authorizing them to access internal resources. Compared to traditional virtual private networks (VPNs), which grant access to an entire local network at once, ZTNA only grants access to the specific application requested and denies access to applications and data by default. ... Browser isolation is a technology that keeps browsing activity secure by separating the process of loading webpages from the user devices displaying them. This way, potentially malicious webpage code does not run on a user’s device, preventing malware infections and other cyber attacks from impacting both user devices and internal networks. Remote browser isolation (RBI) works together with other secure access functions - for example, security teams can configure Secure Web Gateway policies to automatically isolate traffic to known or potentially suspicious websites.
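The default-deny behavior described above can be sketched as a per-application policy check, in contrast to a VPN's network-wide grant. The user names and policy entries below are hypothetical, purely to illustrate the model:

```python
# Sketch of ZTNA-style authorization: every request is checked against an
# explicit per-application allow list, and anything not listed is denied by
# default. A traditional VPN, by contrast, grants the whole network at once.

POLICY = {  # hypothetical allow list: user -> applications explicitly granted
    "alice": {"payroll-app"},
    "bob": {"wiki", "crm"},
}

def authorize(user: str, device_verified: bool, app: str) -> bool:
    """Grant only the specific application requested; deny everything else."""
    if not device_verified:          # strict verification of the device, too
        return False
    return app in POLICY.get(user, set())

print(authorize("alice", True, "payroll-app"))  # allowed: explicitly granted
print(authorize("alice", True, "crm"))          # denied: not in her policy
print(authorize("bob", False, "wiki"))          # denied: device not verified
```

Note that an unknown user falls through to an empty grant set, so the default outcome is always denial rather than network-wide access.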


The Metaverse And Web3 Creating Value In The Future Digital Economy

The Metaverse is not to be confused with Web3, which is the third stage of development of the World Wide Web. The Metaverse refers to a virtual reality-based parallel internet world where users can interact with each other and with digital objects in a 3D space, extending the internet into a three-dimensional virtual world. It is an immersive, interactive, and social platform where people can create avatars to represent themselves, buy and sell virtual property, and interact with other users in real time. Web3, by contrast, is about blockchain technology and related concepts, including digital identity, smart contracts, and decentralized applications (dApps). ... Many believe the Metaverse is a speculative scheme of the future, but it's about connecting the digital world with the physical world. It brings people together in a shared, virtual space to interact and create. Entrup continues, “Having personally witnessed the transition from a no-internet world to a globally connected internet world, I find it funny to hear the same negative comments being made about the metaverse.”


The Great Resignation continues. There's an obvious fix, but many bosses aren't interested

The problem for many is that the traditional approach to filling skills gaps has become less and less effective. Every company on the planet seems to be on a mission to build a superstar tech team, and that means developers, cloud specialists and cybersecurity professionals are being snapped up at a rate that means it's almost impossible for hiring managers to keep up. ... "Skills help people stay," the report reads. "They help them thrive in their roles. And they enable you to deliver on your objectives." The problem for employees – and by extension, employers – is that other demands often prevent employees from upskilling. Pluralsight's report found that 61% of tech workers felt too busy to dedicate time to upskilling – the biggest barrier to development identified by survey respondents. This could be seen as another effect of the skills shortage: if teams are short-staffed, their resources are already going to be stretched trying to cover the day-to-day running of the department. On top of this, companies often claim they lack the budget and resources to properly invest in skills.


How to create a cloud center of excellence

The purpose of a CCoE is to provide an organizational focus on cloud initiatives within the company, and to bring order and structure to those initiatives. For a CCoE to be effective, your organization as a whole must buy into cloud computing and want to pursue it. Corporate management must be well-informed and supportive of the endeavor. A CCoE will not—cannot—be effective without company management support. It is not a tool to convince upper management of the effectiveness of the cloud. If you are in a position where you are trying to convince management of the value of the cloud, you should not look at a CCoE as the means to accomplish that. Once your leadership is convinced that the company needs to move forward with a cloud strategy, the CCoE can help execute that strategy. A CCoE is most effective when management makes use of the structure as a tool to bring the rest of the organization along and turn it into a cloud-centric organization. The CCoE is the implementation vessel for management’s wishes.


5G technology disruption – 4 sectors ripe for disruption

Although many banking apps work perfectly well over 4G, they lose effectiveness when internet connectivity is strained by too many people trying to use the network simultaneously. 5G, by theoretically offering speeds up to 100 times faster, may offer an advantage that seems quite prosaic: it provides the service that people expect from 4G but often don’t get. But that benefit is hardly disruptive. To imagine how 5G might be disruptive, consider how the internet and smartphones have disrupted financial services. We have seen new banking services from the likes of Monzo and Revolut disrupt the existing banking industry. 5G will create new opportunities for sophisticated real-time financial services, such as credit checks when buying big-ticket items. 5G may also enable superior security and anti-fraud technologies. For insurance, the opportunity created by data may be especially important, especially data related to mobile activities. The opportunity for augmented and virtual reality may be where the true disruption lies — we may even see a new type of banking model emerge that combines the best of two worlds: traditional branch banking and online.



Quote for the day:

"Leaders must always question the status quo, be aware of the ever-changing environment and be willing to act decisively." -- Mike Finley

October 13, 2014

GE: We’re going all-in with the public cloud
I’m not a big fan of using the word “internal” cloud, because internal is really, in my opinion, well-orchestrated virtualization that people are calling cloud for marketing purposes. But as an operating model, yes. We have internal platforms that drive those same cloudlike behaviors. We have what EMC or one of those guys would call a private cloud when they're selling you one. But our vision is: We think that that’s a stopgap. We think it’s a temporary solution. Frankly, we think even the hybrid cloud is really a temporary solution. I think there could be some good debates over how long you mean when you say "temporary."


Kaiser Permanente: The Rise of the 21st Century Health System
Technology helps us form those tight bonds with our members that are essential to promoting good health. Several years ago we set up an online service, My Health Manager, to enable members to connect with their healthcare providers and health information. More than 4.4 million members are registered. Last year, members used My Health Manager to view more than 26 million test results, send more than 11 million emails to care providers, refill more than 10.8 million prescriptions and schedule more than 2.8 million appointments. Members can access the system via our KP App on mobile devices. We call it “care anywhere.”


Connected Cars Vs. Cybercrime: Tough Fight
As the NHTSA notes in its report, cars built after 2009 have more than 60 independent electronic control units (ECUs) in them for controlling everything from heating and entertainment systems to steering, braking, and engine-monitoring functions. Each of these ECUs is accessible either through wired interfaces such as USBs and SD cards or wireless interfaces including Bluetooth, WiFi or near-field communications. Autonomous vehicles like Google's futuristic concept cars are likely to pack a lot more of such components.


Cybercrime fighters to target kingpins, says top EU cyber cop
Specialists in the virtual underground economy are developing products and services for use by other cyber criminals, the Internet Organised Crime Threat Assessment (IOCTA) report said. The report’s authors believe this crime-as-a-service business model drives innovation and sophistication, and provides access to a wide range of services that facilitate almost any type of cyber crime. As a result, the barriers to entry for cyber crime are being lowered to allow those lacking technical expertise - including traditional organised crime groups - to conduct cyber crime.


Buying enterprise mobility management: How important is independence?
Independence is one of the big themes for EMM players like MobileIron and Good Technology. These companies are focused on mobile management, and that's all they do. In the EMM space, independence and focus are hard to find. Among the big players, EMM is either part of a bundle — VMware's AirWatch and Citrix XenMobile are likely to attempt desktop virtualization cross-selling — or its neutrality within a suite has to be proven over time. Will Microsoft really want to manage iOS and Android as well as it does Windows? BlackBerry manages iOS and Android devices too. But since both of those vendors have their own platforms, the burden of proof is on them to show they're neutral.


IT industry group slams burdens imposed by proposed EU privacy policy
"There remain a number of weaknesses in the text that will result in unnecessary burdens on data controllers and processors, without any improvement in privacy protection," the group said. The amendment approved by the justice ministers also requires that businesses carry out an impact assessment of the risks associated with holding data, a process DigitalEurope criticized as complex. DigitalEurope also said the rules on sub-contracting data processing work were overly restrictive. Rules for employing data protection officers who are responsible for ensuring compliance with the law are "unwieldy and inflexible," the group said.


Network as a service: The core of cloud connectivity
NaaS, for a cloud network builder, is an abstract model of a network service that can be at either Layer 2 (Ethernet, VLAN) or Layer 3 (IP, VPN) in the OSI model. A cloud user defines the kind of NaaS that their cloud connectivity requires, and then uses public or private tools to build that NaaS. NaaS can define how users access cloud components, and also how the components themselves are connected in a private or hybrid cloud. The best-known example of NaaS in the public cloud space is Amazon's Elastic IP address service. This service lets any cloud host in EC2, wherever it is located, be represented by a constant IP address. The Elastic IP NaaS makes the cloud look like a single host. This is an example of an access-oriented NaaS application.
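The Elastic IP example above amounts to a stable public address mapped onto a changing backend host. A minimal sketch of that indirection, with all addresses and host names hypothetical (203.0.113.0/24 is a documentation range):

```python
# Sketch of the Elastic-IP idea behind access-oriented NaaS: a constant
# public address is remapped to whichever cloud host currently backs the
# service, so clients always see one stable endpoint ("a single host").

class ElasticIP:
    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.backend = None  # the cloud host currently behind the address

    def associate(self, host: str) -> None:
        """Point the constant public address at a (possibly new) host."""
        self.backend = host

    def resolve(self) -> str:
        return self.backend

eip = ElasticIP("203.0.113.10")
eip.associate("host-in-us-east")
print(eip.public_ip, "->", eip.resolve())
eip.associate("host-in-eu-west")  # host moves; the public address is unchanged
print(eip.public_ip, "->", eip.resolve())
```

The abstraction is what makes it NaaS rather than plain addressing: the user defines the service (a constant address) and the provider handles wherever the host actually lives.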


Database revolutions, reloaded
HP has pointed to three major innovation waves in database history. Mainframes formed the bedrock of the first age of databases, populated and popularised (and so, of course, refined) by government, the financial services industry and telecommunications — the industries that used huge swathes of data back in the day. Today it’s all high-volume business transactions ... The second age of data was driven by Online Transactional Processing (OLTP): OLTP databases proliferated during the first glory days of client/server computing.


Extreme Networks acquisition breathes new life into company
Overnight, the merger resulted in the network solutions provider doubling in size, both in terms of revenue and portfolio. Extreme, which had struggled to retain a 1% share of the Ethernet switching market, suddenly leapfrogged many of its competitors. The consequences of this were significant; as a $300m company operating in the switching space, Extreme simply did not have the scale to effectively market its products. However, as a $600m company, Extreme ranks fourth in the worldwide market, according to Dell’Oro Group, and now has the necessary clout to start bidding for previously unattainable business.


Researcher makes the case for DDOS attacks
Sauter goes into some detail about the penalties under Federal law for violating this act and, no argument here, they are extreme and excessive. You can easily end up with many years in prison. This is, in fact, a problem generally true of Federal law, the number of crimes under which has grown insanely in the last 30 or so years, with penalties growing proportionately. For an informed and intelligent rant on the problem, I recommend Three Felonies a Day by Harvey Silverglate. Back to hacktivist DDOS attacks.



Quote for the day:

"Learn to pause... or nothing worthwhile will catch up to you. Prepare your mind to receive the best that life has to offer." -- Anonymous