
Daily Tech Digest - March 01, 2025


Quote for the day:

"Your life does not get better by chance, it gets better by change." -- Jim Rohn


Two AI developer strategies: Hire engineers or let AI do the work

Philip Walsh, director analyst in Gartner’s software engineering practice, said that from his vantage point he sees “two contrasting signals: some leaders, like Marc Benioff at Salesforce, suggest they may not need as many engineers due to AI’s impact, while others — Alibaba being a prime example — are actively scaling their technical teams and specifically hiring for AI-oriented roles.” In practice, he said, Gartner believes AI is far more likely to expand the need for software engineering talent. “AI adoption in software development is early and uneven,” he said, “and most large enterprises are still early in deploying AI for software development — especially beyond pilots or small-scale trials.” Walsh noted that, while there is a lot of interest in AI-based coding assistants (Gartner sees roughly 80% of large enterprises piloting or deploying them), actual active usage among developers is often much lower. “Many organizations report usage rates of 30% or less among those who have access to these tools,” he said, adding that the most common tools are not yet generating productivity gains sufficient to yield cost savings or headcount reductions. He said, “Current solutions often require strong human supervision to avoid errors or endless loops. Even as these technologies mature over the next two to three years, human expertise will remain critical.”


The Great AI shift: The rise of ‘services as software’

Today, AI is pushing the envelope by turning services built to be used by humans as ‘self-serve’ utilities into software solutions that run autonomously—a paradigm shift the venture capital world, in particular, has termed ‘Services as Software’ ... The shift is already conspicuous across industries. AI tools like Harvey AI are transforming the legal and compliance sector by analysing case law and generating legal briefs, essentially replacing human research assistants. The customer support ecosystem that once required large human teams in call centres now handles significant query volumes daily with AI chatbots and virtual agents. ... The AI-driven shift brings into question the traditional notion of availing an ‘expert service’. Software development, legal, and financial services are all coveted industries where workers are considered ‘experts’ delivering specialised services. The human role will undergo tremendous redefinition and will require calibrated re-skilling. ... Businesses won't simply replace SaaS with AI-powered tools; they will build the company's processes and systems around these new systems. Instead of hiring marketing agencies, companies will use AI to generate dynamic marketing and advertising campaigns. Businesses will rely on AI-driven quality assurance and control instead of outsourcing software testing, Quality Assurance, and Quality Control.


Resilience, Observability and Unintended Consequences of Automation

Instead of thinking of replacing work that humans do, it's about augmenting that work: how do we make it easier for us to do these kinds of jobs? That might be writing code, that might be deploying it, that might be tackling incidents when they come up. The fancy, nerdy academic jargon for this is joint cognitive systems. Instead of replacement, or functional allocation (another good nerdy academic term: we'll give the computers this piece, we'll give the humans those pieces), how do we have a joint system where that automation is really supporting the work of the humans in this complex system? And in particular, how do you allow them to troubleshoot it, to introspect it, to actually understand it? The very nerdy versions of this research even lay out possible ways of thinking about what these computers can do to help us. ... We could go monolith to microservices, we could go pick your digital transformation. How long did that take you? And how much care did you put into that? Maybe some of it was too long or too bureaucratic, or what have you, but I would argue that we tend to YOLO internal developer technology way faster and way looser than we do with the things that actually make us money, or at least that is the perception.


The Modern CDN Means Complex Decisions for Developers

“Developers should not have to be experts on how to scale an application; that should just be automatic. But equally, they should not have to be experts on where to serve an application to stay compliant with all these different patchworks of requirements; that should be more or less automatic,” Engates argues. “You should be able to flip a few switches and say ‘I need to be XYZ compliant in these countries,’ and the policy should then flow across that network and orchestrate where traffic is encrypted and where it’s served and where it’s delivered and what constraints are around it.” ... Along with the physical constraint of the speed of light and the rise of data protection and compliance regimes, Alexander also highlights the challenge of costs as something developers want modern CDNs to help them with. “Egress fees between clouds are one of the artificial barriers put in place,” he claims. That can be 10%, 20% or even 30% of overall cloud spend. “People can’t build the application that they want, they can’t optimize, because of some of these taxes that are added on moving data around.” Update patterns aren’t always straightforward either. Take a wiki like Fandom, where Fastly founder and CTO Artur Bergman was previously CTO. 


A Comprehensive Look at OSINT

Cybersecurity professionals within corporations rely on public data to identify emerging phishing campaigns, data breaches, or malicious activity targeting their brand. Investigative journalists and academic researchers turn to OSINT for fact-checking, identifying new leads, and gathering reliable support for their reporting or studies. ... Avoiding OSINT or downplaying its value can leave organizations unaware of threats and opportunities that are readily discoverable to others. By failing to gather open-source data, businesses and government agencies could remain in the dark about malicious activities, negative brand impersonations, or stolen credentials circulating on forums and dark web marketplaces. In the event of a security breach or public scandal, stakeholders may view the lack of proper OSINT measures as a failure of due diligence, eroding trust and tarnishing the organization’s image. ... The primary driver behind OSINT’s growth is the vast reservoir of information generated daily by digital platforms, databases, and news outlets. This public data can be invaluable for enhancing security, improving transparency, and making more informed decisions. Security professionals, for instance, can preemptively identify threats and vulnerabilities posted openly by malicious actors. 


OT/ICS cyber threats escalate as geopolitical conflicts intensify

A persistent lack of visibility into OT environments continues to obscure the full scale of these attacks. These insights come from Dragos’ 2025 OT/ICS Cybersecurity Report, its eighth annual Year in Review, which analyzes industrial organizations’ cyber threats. ... VOLTZITE is arguably the most crucial threat group to track in critical infrastructure. Due to its dedicated focus on OT data, the group is a capable threat to ICS asset owners and operators. This group shares extensive technical overlaps with the Volt Typhoon threat group tracked by other organizations. It utilizes the same techniques as in previous years, setting up complex chains of network infrastructure to target, compromise, and steal OT-relevant data—GIS data, OT network diagrams, OT operating instructions, etc.—from victim ICS organizations. ... Increasing collaboration between hacktivist groups and state-backed cyber actors has led to a hybrid threat model where hacktivists amplify state objectives, either directly or through shared infrastructure and intelligence. State actors increasingly look to exploit hacktivist groups as proxies to conduct deniable cyber operations, allowing for more aggressive attacks with reduced attribution risks.


Leveraging AR & VR for Remote Maintenance in Industrial IoT

AR tools like Microsoft’s HoloLens 2 are enabling workers on-site to receive real-time guidance from experts located anywhere in the world. Using AR glasses or headsets, on-site personnel can share their view with remote technicians, who can then overlay instructions, schematics, or step-by-step troubleshooting guidance directly onto the worker’s field of vision. This allows maintenance teams to resolve issues faster and more accurately, without the need for travel, reducing downtime and operational costs. ... By using VR simulations, workers can familiarize themselves with equipment, troubleshoot issues, and practice responses to emergencies, all in a virtual setting. This hands-on experience builds confidence and competence, ultimately improving safety and efficiency when dealing with real equipment. As IIoT systems become more sophisticated, VR training can play a key role in ensuring that the workforce is well-prepared to handle advanced technologies without risking costly mistakes or accidents. ... In the future, we can expect even more seamless integration between AR/VR systems and IIoT platforms, where real-time data from sensors and machines is directly fed into the AR/VR environment, providing a comprehensive view of machine health, performance and issues. 


Just as DNA defines an organism’s identity, business continuity must be deeply embedded in every aspect of your organization. It is more than just a collection of emergency plans or procedures; it embodies a philosophy that ensures not only survival during disruptions, but long-term sustainability as well. ... An organization without continuity is like a tree without roots—fragile and vulnerable to the slightest shock. Continuity serves as an anchor, allowing organizations to navigate crises while staying aligned with their strategic goals. Any organization that aims to grow and thrive must take a proactive approach to continuity. Continuity strategies and initiatives can be seen as the roots of a tree, natural extensions that provide stability and sustain growth. ... It is essential that both leaders and team members possess the experience and skills needed to execute their work effectively. ... Thoroughly assess your key vulnerabilities. This involves two primary methods: a BIA, which analyzes the impacts of a disturbance over time to determine recovery priorities, resource requirements, and appropriate responses; and risk analysis, which identifies risks tied to prioritized activities and critical resources. Together, these two approaches offer a comprehensive understanding of your organization’s pain points.


Keep Your Network Safe From the Double Trouble of a ‘Compound Physical-Cyber Threat'

This phenomenon, a “compound physical-cyber threat,” where a cyberattack is intentionally launched around a heatwave or hurricane, for example, would have outsized and potentially devastating effects on businesses, communities, and entire economies, according to a 2024 study led by researchers at Johns Hopkins University. “Cyber-attacks are more disruptive when infrastructure components face stresses beyond normal operating conditions,” the study asserted. Businesses and their IT and risk management people would be wise to take notice, because both cyberattacks and weather-related disasters are increasing in frequency and in the cost they exact from their victims. ... Take what you learn from the risk assessment to develop a detailed plan that outlines the steps your organization intends to take to preserve cybersecurity, business continuity, and network connectivity during a crisis. Whether you’re a B2B or B2C organization, your customers, employees, suppliers and other stakeholders expect your business to be “always on,” 24/7/365. How will you keep the lights on, the lines of communications open, and your network insulated from cyberattack during a disaster? 


‘It Won’t Happen to Us:’ The Dangerous Mindset Minimizing Crisis Preparation

The main mistakes in crisis situations include companies staying silent and not releasing official statements from management, creating a vacuum of information and promoting the spread of rumors. ... First and foremost, companies should not underestimate the importance of communication, especially when things are not going well. During a crisis, many companies prefer to sit quietly and wait without informing or sharing anything about their measures and actions in connection with the crisis. This is the wrong approach. Silence gives competitors enough space to thrive and gain a market advantage. Meanwhile, journalists won’t stop working on hot stories. When you don’t share anything meaningful with them or your audience, they may collect and publish rumors and misinformation about your company. And the lack of comments creates the ground for negative interpretations. Therefore, transparency and efficiency are key principles of anti-crisis communication. If you are clear in your messages and give quick responses, it allows the company to control the information agenda. The surefire way to gain and maintain trust is to promptly and regularly inform your company’s investors during a crisis through your own channels. 

Daily Tech Digest - November 04, 2020

Reworking the Taxonomy for Richer Risk Assessments

With pre-assessment and planning, you need to think about the desired outcome (i.e., identify the risks to the facility) and identify the necessary actions to mitigate or eliminate the risks and associated vulnerabilities. The flow chart above is a detailed view of this phase and includes collecting and digesting documents, identifying the team members and the necessary skill sets, and getting ready for travel. Of course, contacting the "customer" and setting up the necessary on-site logistics are important. ... Don't forget these threats and vulnerabilities can be cyber or physical. They can also be part of the site management and culture. What about training or lack thereof? They can all contribute to the risk profile of the facility. The graphic above offers some elements of the on-site activities. You can see that we have inspections, observations, taking photographs, and looking at the site network and architecture. Even a cyber-vulnerability scan may be part of the site assessment. These activities are intended to be part of the site assessment plan. However, don't let the plan place barriers on your site risk reviews. Feel free to follow leads and evidence of problems, since that is why you are on-site rather than doing a remote risk assessment via Zoom.


How blockchain is set to revolutionize the healthcare sector

Despite its potential, data portability across multiple systems and services is a real issue. There is nothing more valuable and personal to an individual than their personal medical records, so making data shareable across services will inevitably raise concerns around the spectre of data being misused. Currently, data does not flow seamlessly across technology solutions within healthcare. For example, in the UK your hospital records do not form part of your GP records, but the advantages are clear in terms of treatment and preventative care were they to do so. Unfortunately, it is not likely a centralised storage and delivery system will get traction until there is one that can ensure the appropriate encryption and security. The risks are simply too high. Yet, it is an issue that a technology like blockchain can tackle. This is because the purpose of the chain is to store a series of transactions in a way that cannot be altered or changed. What renders it immutable is the combination of two opposing things: the cryptography and its openness. Each transaction is signed with a private key and then distributed amongst a peer to peer set of participants. Without a valid signature, new blocks created by data changes are ignored and not added to the chain. 
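The hash-linking half of that immutability can be sketched in a few lines (signatures and peer-to-peer distribution are omitted, and all names here are illustrative): each block's hash covers the previous block's hash, so altering any historical record breaks every later link.

```python
import hashlib
import json

def block_hash(prev_hash: str, payload: dict) -> str:
    """Hash the previous block's hash together with this block's payload,
    so changing an earlier block invalidates every later link."""
    record = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(record.encode()).hexdigest()

def append_block(chain: list, payload: dict) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    chain.append({"prev": prev, "payload": payload,
                  "hash": block_hash(prev, payload)})

def verify_chain(chain: list) -> bool:
    """Recompute every link; a single altered payload breaks the chain."""
    prev = "genesis"
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(prev, block["payload"]):
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, {"patient": "A", "event": "vaccination"})
append_block(chain, {"patient": "A", "event": "check-up"})
assert verify_chain(chain)

chain[0]["payload"]["event"] = "altered"   # tampering with history...
assert not verify_chain(chain)             # ...is immediately detectable
```

A real chain adds the private-key signatures and peer validation the excerpt describes; this sketch only shows why altered data is "ignored and not added to the chain".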


UX Patterns: Stale-While-Revalidate

Stale-while-revalidate (SWR) caching strategies provide faster feedback to the user of web applications, while still allowing eventual consistency. Faster feedback reduces the necessity to show spinners and may result in better-perceived user experience. ... Developers may also use stale-while-revalidate strategies in single-page applications that make use of dynamic APIs. In such applications, oftentimes a large part of the application state comes from remotely stored data (the source of truth). As that remote data may be changed by other actors, fetching it anew on each request guarantees to always return the freshest data available. Stale-while-revalidate strategies substitute the requirement to always have the latest data for that of having the latest data eventually. The mechanism works in single-page applications in a similar way as in HTTP requests. The application sends a request to the API server endpoint for the first time, then caches and returns the resulting response. The next time the application makes the same request, the cached response is returned immediately, while the request simultaneously proceeds asynchronously. When the response is received, the cache is updated, with the appropriate changes to the UI taking place.
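The mechanism can be sketched roughly like this (a toy, synchronous version with made-up names; real SWR implementations revalidate asynchronously in the background and then update the UI):

```python
class SWRCache:
    """Minimal stale-while-revalidate sketch: reads return the cached
    value immediately, then refresh the cache for the next read."""

    def __init__(self, fetcher):
        self.fetcher = fetcher   # function: key -> fresh value
        self.cache = {}

    def get(self, key):
        if key not in self.cache:
            # Cache miss: nothing stale to serve, so fetch up front.
            self.cache[key] = self.fetcher(key)
            return self.cache[key]
        stale = self.cache[key]              # serve the stale value now...
        self.cache[key] = self.fetcher(key)  # ...and revalidate for later
        return stale

counter = {"n": 0}
def fetcher(key):
    counter["n"] += 1
    return f"{key}-v{counter['n']}"

swr = SWRCache(fetcher)
assert swr.get("user") == "user-v1"   # first request: fetched fresh
assert swr.get("user") == "user-v1"   # second: stale value served instantly
assert swr.get("user") == "user-v2"   # third: sees the revalidated value
```

The trade-off the excerpt describes is visible in the assertions: the second read is instant but one version behind, and consistency arrives eventually on the next read.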


The Inevitable Rise of Intelligence in the Edge Ecosystem

Edge computing is becoming an integral part of the distributed computing model, says Nishith Pathak, global CTO for analytics and emerging technology with DXC Technology. He says there is ample opportunity to employ edge computing across industry verticals that require near real-time interactions. “Edge computing now mimics the public cloud,” Pathak says, in some ways offering localized versions of cloud capabilities regarding compute, the network, and storage. Benefits of edge-based computing include avoiding latency issues, he says, and anonymizing data so only relevant information moves to the cloud. This is possible because “a humungous amount of data” can be processed and analyzed by devices at the edge, Pathak says. This includes connected cars, smart cities, drones, wearables, and other internet of things applications that consume on demand compute. The population of devices and scope of infrastructure that support the edge are expected to accelerate, says Jeff Loucks, executive director of Deloitte’s center for technology, media and telecommunications. He says implementations of the new communications standard have exceeded initial predictions that there would be 100 private 5G network deployments by the end of 2020. “I think that’s going to be closer to 1,000,” he says.


Take a Dip into Windows Containers with OpenShift 4.6

Windows Operating System in a container? Who would have thought?!? If you asked me that question a few years back, I would have told you with conviction that it would never happen! But if you ask me now, I will answer you with a big, emphatic yes and even show you how to do so! In this article, I will demonstrate how you can run Windows workloads in OpenShift 4.6 by deploying a Windows container on a Windows worker node. In addition, I will then highlight some of the issues and challenges that I see from a system administrator perspective. ... For customers who have heterogeneous environments with a mix of Linux and Windows workloads, the announcement of a supported Windows container feature on OpenShift 4.6 is exciting news. As of this writing, the supported workloads to run on Windows containers can be either .NET core applications, traditional .NET framework applications, or other Windows applications that run on a Windows server. So when did the work start to make Windows containers possible to run on top of OpenShift? In 2018, Red Hat and Microsoft announced the joint engineering collaboration with the goal of bringing a supported Windows containers feature into OpenShift.


GPS and water don't mix. So scientists have found a new way to navigate under the sea

Underwater devices already exist, for example to be fitted on whales as trackers, but they typically act as sound emitters. The acoustic signals produced are intercepted by a receiver that in turn can figure out the origin of the sound. Such devices require batteries to function, which means that they need to be replaced regularly – and when it is a migrating whale wearing the tracker, that is no simple task. On the other hand, the UBL system developed by MIT's team reflects signals, rather than emits them. The technology builds on so-called piezoelectric materials, which produce a small electrical charge in response to vibrations. This electrical charge can be used by the device to reflect the vibration back to the direction from which it came. In the researchers' system, therefore, a transmitter sends sound waves through water towards a piezoelectric sensor. The acoustic signals, when they hit the device, trigger the material to store an electrical charge, which is then used to reflect a wave back to a receiver. Based on how long it takes for the sound wave to reflect off the sensor and return, the receiver can calculate the distance to the UBL.  "In contrast to traditional underwater acoustic communication systems, which require each sensor to generate its own signals, backscatter nodes communicate by simply reflecting acoustic signals in the environment," said the researchers.
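The distance calculation at the receiver is simple time-of-flight arithmetic: a minimal sketch, assuming a typical speed of sound in seawater of about 1,500 m/s (the real value varies with temperature, salinity, and depth), with the one-way distance being half the round trip.

```python
SPEED_OF_SOUND_SEAWATER = 1500.0  # m/s; a typical value, varies with conditions

def distance_from_echo(round_trip_seconds: float) -> float:
    """The receiver times how long the signal takes to reach the
    backscatter node and reflect back; one-way distance is half
    the round-trip distance travelled by the sound wave."""
    return SPEED_OF_SOUND_SEAWATER * round_trip_seconds / 2

# A reflection that returns after 0.2 s puts the node roughly 150 m away.
assert abs(distance_from_echo(0.2) - 150.0) < 1e-9
```

With several receivers (or several transmissions), the same per-receiver distances can be combined to triangulate the node's position rather than just its range.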


Temporal Tackles Microservice Reliability Headaches

Temporal consists of a programming framework (or SDK) and a managed service (or backend). The core abstraction in Temporal is a fault-oblivious stateful Workflow with business logic expressed as code. The state of the Workflow code, including local variables and threads it creates, is immune to process and Temporal service failures. Temporal supports the programming languages Java and Go, but has SDKs in the works for Ruby, Python, Node.js, C#/.NET, Swift, Haskell, Rust, C++ and PHP. In the event of a failure while running a Workflow, state is fully restored to the line in the code where the failure occurred and the process continues without developer intervention. One of the restrictions on Workflow code, however, is that it must produce exactly the same result each time it is executed, which rules out external API calls. Those must be handled through what it calls Activities, which the Workflow orchestrates. An activity is a function or an object method in one of the supported languages, stored in task queues until an available worker invokes its implementation function. When the function returns, the worker reports its result to the Temporal service, which then reports to the Workflow about completion.
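The recovery behaviour rests on event-sourced replay, which a toy sketch can illustrate (this shows the general idea, not Temporal's actual SDK; the `Replayer` name and API here are made up): activity results are recorded in a durable history, and after a crash the deterministic workflow code re-executes from the top, consuming recorded results instead of re-running the activities, which restores its state without repeating side effects.

```python
class Replayer:
    """Toy event-history replay: activity results are appended to a
    durable history; on re-execution, recorded results are consumed
    instead of re-running the activity, so deterministic workflow
    code lands back in exactly the same state."""

    def __init__(self, history=None):
        self.history = history if history is not None else []
        self.cursor = 0

    def activity(self, fn, *args):
        if self.cursor < len(self.history):
            result = self.history[self.cursor]  # replay recorded result
        else:
            result = fn(*args)                  # first run: execute and record
            self.history.append(result)
        self.cursor += 1
        return result

calls = []
def charge(amount):
    calls.append(amount)        # side effect that must not repeat on replay
    return f"charged-{amount}"

def workflow(r):                # deterministic: same inputs, same result
    a = r.activity(charge, 10)
    b = r.activity(charge, 20)
    return [a, b]

first = Replayer()
workflow(first)                 # normal run: both activities execute
assert calls == [10, 20]

# "Process crash": restart the workflow from the recorded history.
recovered = Replayer(history=first.history)
assert workflow(recovered) == ["charged-10", "charged-20"]
assert calls == [10, 20]        # replay did not charge anyone twice
```

This is also why Workflow code must be deterministic and why external API calls are pushed into Activities: only the Activity results are recorded, so everything else must reproduce itself identically on replay.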


The Cybersecurity Myths We Hear Ourselves Saying

There is a widely held belief — including from 19% of respondents — that the brands you can trust won't take advantage of you and that they will protect your data, as they surely do everyone else's data. However, the reality is that almost all mainstream sites are collecting data about you, and if they're not profiting off that data themselves, then there is a very good chance that hackers are. The more sites you go to, even trusted ones, the more cookies that are held in your browser. What's more, by surfing to numerous sites, not only are you providing more data about yourself, but you're also providing more pools of data that are being held by the various sites you visit. Applying basic theories of probability, increasing the number of pools increases the probability that any one of them will be breached. The hard truth is that the only way to effectively ensure privacy is to disconnect from the internet. Failing that, another good way to protect data is by encrypting internet traffic using a VPN. A VPN adds an extra layer of encrypted protection to a secured Wi-Fi network, preventing corporate agents from tracking you while you're online.
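The probability argument works out as follows (a sketch assuming, for illustration, that each site has the same independent chance of being breached): the chance that at least one of n pools is breached is the complement of all of them staying safe.

```python
def breach_probability(p_per_site: float, n_sites: int) -> float:
    """If each site independently has probability p of being breached,
    the chance that at least one of n sites is breached is 1 - (1-p)^n."""
    return 1 - (1 - p_per_site) ** n_sites

# With an illustrative 1% breach chance per site, spreading your data
# across 50 sites pushes the chance that at least one pool is breached
# to nearly 40%.
assert abs(breach_probability(0.01, 1) - 0.01) < 1e-9
assert 0.39 < breach_probability(0.01, 50) < 0.40
```

The per-site figure is made up, but the shape of the curve is the point: the risk compounds with every additional pool of data you leave behind.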


Running React Applications at the Edge with Cloudflare Workers

Cloudflare Workers are a cool technology introduced by Cloudflare a couple of years ago. Normally, you might have a server living in a data center somewhere in the world. You’ll likely put a CDN in front of that to handle caching and manage the load. But imagine having the power of a server directly inside your CDN’s data center. This is what Cloudflare Workers offers: a way to execute code directly at the edge of the CDN. This is a really powerful way to manage and modify requests going to and from your origin server—but it also opens up a whole new set of possibilities: instead of paying for and managing your own server, you can use Cloudflare Workers as your origin. This means lightning-fast responses directly at the edge without a round trip to another data center. ... These patterns are what inspired Flareact. Cloudflare Workers offers a Workers Sites feature that allows you to host a static site on top of Cloudflare Workers, with assets stored in a KV [Key/Value] store at the edge. This, combined with the underlying Workers dynamic platform, seemed like the perfect use case for Next.js. However, due to technical constraints, it proved too difficult to get Next.js working on Cloudflare Workers. So I set out to build my own framework modeled after Next.js.


The future is female: overcoming the challenges of being a woman in tech

Self-doubt affects everyone, but being in an industry in which you are outnumbered by the opposite gender is particularly tough. According to TrustRadius, three out of four tech professionals have experienced imposter syndrome at work, but women are 22% more likely than men to feel this way. Sheryl Sandberg even said that women in tech “hold ourselves back in ways both big and small, by lacking self-confidence, by not raising our hands, and by pulling back when we should be leaning in.” This is unsurprising, as women are typically taught not to brag from an early age. Self-marketing might feel egotistical and uncomfortable at first but it definitely feels more natural with practice! Confidence comes with knowledge; with technology constantly evolving as new software and systems are created, women making their way in tech should continue to learn as much as possible. Being on top of new developments will get you noticed and make it easier to advocate for yourself. But, if you don’t feel comfortable selling yourself, let others do this for you. Ask trusted clients, colleagues and contacts to give testimonials – many will be delighted to do so – and sing the praises of those around you, as people will return the favour.



Quote for the day:

"The problem with being a leader is that you're never sure if you're being followed or chased." -- Claire A. Murray

Daily Tech Digest - April 14, 2020

Microsoft and Google delay online authentication change


Companies are gradually replacing basic authentication with more modern protocols. Microsoft and Google are both shifting to OAuth 2.0, which uses tokens to authenticate applications with online services, and gives them an expiry date. That way, an application stays authorised for a predefined period, minimising the need to exchange credentials. This also makes it easier to implement multi-factor authentication (MFA). Microsoft announced that it would switch off Basic Authentication in its Exchange Web Services (EWS) API for Office 365 back in July 2018. It planned to turn off support for the feature entirely on 13 October 2021. At the same time, it also advised developers to begin moving away from this API, instead using Microsoft Graph, which is its newer API for accessing back-end cloud services such as Exchange Online. It also expanded those plans in September 2019, announcing that it would turn off Basic Authentication in Exchange Online for Exchange ActiveSync (EAS), POP, IMAP and Remote PowerShell.
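The expiry behaviour described above can be sketched as follows (a toy illustration of token lifetime only, not a full OAuth 2.0 grant flow; the class and field names are made up): the client holds an opaque token with a deadline and checks it before each call, instead of resending credentials every time.

```python
class TokenGrant:
    """Toy access token with an expiry date: valid for a predefined
    period, after which the client must re-authorise."""

    def __init__(self, token: str, lifetime_seconds: int, now: float):
        self.token = token
        self.expires_at = now + lifetime_seconds

    def is_valid(self, now: float) -> bool:
        return now < self.expires_at

issued_at = 1_000_000.0
grant = TokenGrant("opaque-access-token", lifetime_seconds=3600, now=issued_at)
assert grant.is_valid(issued_at + 600)       # still inside the hour
assert not grant.is_valid(issued_at + 7200)  # expired: re-authorisation needed
```

Because credentials are exchanged only at (re-)authorisation time, that is also the natural point to demand a second factor, which is why the excerpt says this model makes MFA easier to implement.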



How to achieve agile DevOps: a disruptive necessity for transformation

Organisations have to accept that the transition to agile DevOps is going to be disruptive, but entirely necessary for effective and sustainable transformation. According to Erica Langhi, EMEA senior solutions architect at Red Hat, “the best way to mitigate this disruption is through transparency and openness — businesses need to make the benefits of this transition clear to their teams. After that, they should encourage their developers and operations teams to look at how other parts of the business are working.” After this, leaders will need to look at the company’s culture “and start making the tweaks necessary to promote collaboration and communication between teams; this isn’t optional, as nine out of ten organisations that try to make the change to DevOps without changing their culture and structure will fail,” she advised. Overall, to create a maximally agile DevOps, organisations “should also invest in a few other technologies and cultural changes. DevOps in fact brings together people, processes, and technology for better efficiency. ...” Langhi continued.


Defining the Database Requirements of Dynamic JAMstack Applications


To understand why multi-region distribution is desirable, let’s revisit why static websites on CDNs are incredibly fast. A CDN is fast to deliver your content because it contains copies of your content at different locations. When content is requested from the CDN from a specific location, it will attempt to deliver that content from the closest location to the requestor. In order to get an idea of how much that matters, take a glance at the Zeit CDN status page which shows you the difference in latency between your current location and other locations. By deploying our applications to a CDN, our pages automatically load from the closest location to the user, which results in low loading latencies. And low latencies result in a great user experience. In order to keep this user experience, the dynamic data that will be loaded from our APIs has to exhibit low latencies as well, and the best way to achieve this is to use a distributed database.
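The routing decision at the heart of this can be sketched in a line (the region names and latency figures below are made up for illustration, standing in for the kind of per-location numbers a status page like Zeit's reports): the CDN serves each request from the location with the lowest latency to the requestor.

```python
# Hypothetical round-trip latencies in ms from one visitor to each
# edge location (illustrative values, not real measurements).
LATENCY_MS = {"sfo": 12, "iad": 71, "fra": 148, "hnd": 102}

def closest_location(latencies: dict) -> str:
    """Pick the edge location with the lowest latency to the requestor,
    which is what makes CDN-delivered content feel fast."""
    return min(latencies, key=latencies.get)

assert closest_location(LATENCY_MS) == "sfo"
```

The argument in the excerpt is that the same selection must apply to API data: a distributed database lets the dynamic reads resolve at a nearby replica, keeping them as close to the 12 ms case as possible instead of always paying the 148 ms one.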


Talking Digital Future: Blockchain Technology

Indeed, the United Nations World Food Program, for example, is serving an incredibly large number of people. And we want the highest amount of good resources to go to those people — so they are. The U.N. did a first round of experimentation on blockchain so it could track the flow of aid from source to destination, and it was very successful. Now, it’s in the second or third round of expanding it. I think I like this technology because it directly and positively impacts human beings. This is probably one of my favorite cases at the moment. Another one is the real estate registries. Very often these are paper-based. I think about New Orleans when Hurricane Katrina came a few years ago. The city was flooded, and it was a complete disaster. It was a terrible tragedy. When the water subsided and the city was getting back on its feet, lots of houses were destroyed and the city had to find the titles for the homes. Well, they were destroyed because they were in boxes and the papers were in the basement of a building that was flooded. So, they had a lot of difficulty for a very long time identifying which properties belonged to who, and then how they could sell the properties.


Edge computing vs. cloud computing: What's the difference?

Real-time performance is one of the main reasons for using an edge computing architecture, but not the only one. Edge computing can also help prevent overloading network backbones by processing more data locally and sending to the cloud only data that needs to go to the cloud. There could also be security, privacy, and data sovereignty advantages to keeping more data close to the source rather than shipping it to a centralized location. There are plenty of challenges ahead for edge computing, however. A recent Gartner report, How to Overcome Four Major Challenges in Edge Computing, suggests “through 2022, 50 percent of edge computing solutions that worked as proofs of concept (POCs) will fail to scale for production use.” Those who pursue the promise of edge computing need to be prepared to tackle all the usual issues associated with technologies that still need to prove themselves – best practices for edge system management, governance, integration, and so on have yet to be defined.
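The "process locally, send only what must go to the cloud" pattern can be sketched as follows (a toy example with made-up sensor readings and thresholds): the edge node digests the full stream and forwards only a summary plus the anomalies, sparing the network backbone.

```python
def filter_at_edge(readings, threshold):
    """Process the full stream of readings locally; forward to the
    cloud only a compact summary and the readings that need central
    attention, instead of shipping every raw sample upstream."""
    summary = {"count": len(readings), "mean": sum(readings) / len(readings)}
    anomalies = [r for r in readings if r > threshold]
    return summary, anomalies   # only this much crosses the network

readings = [21.0, 21.5, 22.0, 80.0, 21.2]   # e.g. temperature samples
summary, anomalies = filter_at_edge(readings, threshold=50.0)
assert summary["count"] == 5
assert anomalies == [80.0]   # one reading out of five leaves the edge
```

The same structure also serves the privacy and sovereignty point in the excerpt: the raw, potentially sensitive samples never leave the source, only the derived values do.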


Enterprises regard the cloud as critical for innovation, but struggle with security


Only a little over half (58%) said their organization has clear guidelines and policies in place for developers building applications and operating in the public cloud. And of those, 25% said these policies are not enforced, while 17% confirmed their organization lacks clear guidelines entirely. “Enterprises believe they must choose between innovation and security—a false choice we see manifested in the results of this report, as well as in conversations with our customers and prospects,” said Brian Johnson, CEO at DivvyCloud. “Only 35% of respondents do not believe security impedes developers’ self-service access to best-in-class cloud services to drive innovation—meaning 65% believe they must choose between giving developers self-service access to tools that fuel innovation and remaining secure. “The truth is, security issues in the cloud can be avoided. By employing the necessary people, processes, and systems at the same time as cloud adoption, enterprises can reap the benefits of the cloud while ensuring continuous security and compliance.”


Developers: Getting ahead is about more than programming languages


From a career perspective, IT professionals will often reach a point where they have to choose between becoming a technical specialist or moving down the management path. But even for those on the management path it is incredibly important that they stay up to date with what is new in tech as it becomes all too easy to fall out of step, he said. Gill says another trend within the IT industry is for companies to become more customer-focused in how they develop their products and services. In light of this, ambitious IT professionals must develop an understanding of the clients' needs as well as the intricacies of the code. "They should discuss requirements directly with them where possible or else with their points of contact within their own organisation, such as sales or business development. Having direct feedback and input from clients means the IT professionals will have a far greater chance of delivering something that will meet their needs," says Gill. Malcolm Lowe, head of IT at Transport for Greater Manchester (TfGM), is another tech chief who believes focusing on the needs of the user is the key to career-development success. He advises other IT professionals to couch everything they do in business outcomes and user needs – because, at the end of the day, that's what you're providing.


How to build a DevSecOps strategy

Almost every DevOps guide talks about implementing the practice at a cultural level, and the same is true with DevSecOps. Developers tend to be incredibly creative and talented people who take a lot of pride in what they do. Get out of their way and allow them to grow. Think of it as future-proofing your security design through a more holistic approach. That’s precisely why the first step on this list is training and educating team members. When given a chance, they will work to further their skills and experience. They will also take everything they learn and incorporate it into the code and content they’re creating. It’s all about giving them the tools they need to succeed, which will only further improve the end product. ... Most likely, there are projects and segments already in place, and your teams created existing code with a different method. Don’t look at this as a negative or obstacle. It provides an excellent opportunity to revisit the foundations of a system to implement the protective armour we’re discussing.


As cybersecurity concerns grow, so does need for security professionals


For people who already work in IT but choose to refocus their energies in the area of cybersecurity, the switch can be lucrative. Job-market analytics company Burning Glass Technologies has been tracking the cybersecurity job market since 2013. In its June 2019 report, it states that the number of cybersecurity job postings has grown 94% since 2013, compared to only 30% for IT positions overall. This growth is three times faster than the overall IT market. Burning Glass’s research shows that cybersecurity jobs account for 13% of all IT jobs. On average, however, cybersecurity jobs take 20% longer to fill than other IT jobs and pay 16% more. This works out to an average of $12,700 more per year. According to the U.S. Bureau of Labor Statistics, the average salary for an information security analyst is $98,350. Analysts plan and carry out security measures to protect an organization’s computer networks and systems. “Their responsibilities continually expand as the number of cyberattacks increases,” Li says.


What Is A Data Passport: Building Trust, Data Privacy And Security In The Cloud


Data passport technology is based on classic mainframe technology, which today can include full encryption of your data, so that every piece of data is encrypted and, even if it is stolen, cannot be used. Data passports let you extend encryption that used to be available only on a physical mainframe to cloud computing. Each piece of data in the cloud has a passport assigned to it, and with the passport you can verify whether the data is being misused, whether the passport is still valid, and so on. These data passports also give companies the ability to protect data and revoke access to it at any time, across a multi-cloud environment. Because the data carries its passport, and its encryption, with it, it helps enterprises secure their data wherever it travels. And that's the most significant development that makes data passports so unique and important: the protection and enforcement of data privacy and security travel with the data, on and off any given platform.
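A minimal sketch of the passport idea, modeling only the metadata and revocation logic the article describes. The class, field names, and access flow are hypothetical; a real system would pair this metadata with actual encryption (for example AES) of the payload, which is omitted here.

```python
from dataclasses import dataclass, field

@dataclass
class DataPassport:
    """Hypothetical passport metadata that travels with a piece of data.
    Access is auditable and can be revoked wherever the data goes."""
    owner: str
    key_id: str
    revoked: bool = False
    audit_log: list = field(default_factory=list)

    def access(self, requester: str) -> bool:
        self.audit_log.append(requester)  # every use is recorded
        return not self.revoked           # revocation applies off-platform too

    def revoke(self):
        self.revoked = True

p = DataPassport(owner="acme", key_id="key-001")
assert p.access("analytics-svc")      # allowed while the passport is valid
p.revoke()
assert not p.access("analytics-svc")  # data is unusable once revoked
```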



Quote for the day:



"Leaders must know where they are going if they expect others to willingly join them on the journey." -- Kouzes & Posner


Daily Tech Digest - March 12, 2019

The 3 surprising secrets that drive innovation in the digital era

It's inevitable: hear the word innovation, and you immediately start thinking about technology. After all, innovation and technology have been nearly synonymous for most of the last two decades. This inclination is even more likely if you're an IT professional, given our natural fondness for technology. But if you want to transform your organization into an innovation machine, the place to start is with the recognition that innovation is not, in fact, about technology at all. ... The way Ubels discussed what the company was doing was illuminating. “I love technology, but it’s about building better buildings for the world,” he explained during a subsequent conversation we had on the subject. “It’s healthy, sustainable, the best working environment for employees. There’s a war for talent, and a building is an important part of how you express yourself as an organization, a building that people like to go to.” Here was the person responsible for the technology at a company that had made technology a central component of its value proposition, and there was almost no talk about technology, either from the keynote stage or during our conversation.



5 steps to performing an effective data security risk assessment

A threat is anything that has the potential to cause harm to the valuable data assets of a business. The threats companies face include natural disasters, power failure, system failure, accidental insider actions (such as accidental deletion of an important file), malicious insider actions (such as a rogue agent gaining membership to a privileged security group), and malicious outsider actions (such as phishing attacks, malware, spoofing, etc.). Each company should have its central risk team determine the most probable threats and plan accordingly. ... A vulnerability is a weakness or gap in a company's network, systems, applications, or even processes which can be exploited to negatively impact the business. Vulnerabilities can be physical in nature (such as old and outdated equipment), they can involve weak system configurations (such as leaving a system unpatched or not following the principle of least privilege), or they can result from awareness issues. Similar to determining threats, analyzing vulnerabilities is also best completed by the central risk team.
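One common way a central risk team combines the threats and vulnerabilities described above is a simple risk register that scores each pairing by likelihood times impact. The entries and the 1-5 scale below are illustrative assumptions, not prescribed values.

```python
# Illustrative risk register: score each threat/vulnerability pairing
# by likelihood x impact (both on a hypothetical 1-5 scale).
findings = [
    {"threat": "phishing",      "vulnerability": "awareness gaps",   "likelihood": 4, "impact": 4},
    {"threat": "power failure", "vulnerability": "no backup power",  "likelihood": 2, "impact": 5},
    {"threat": "rogue insider", "vulnerability": "excess privilege", "likelihood": 2, "impact": 4},
]

for f in findings:
    f["risk"] = f["likelihood"] * f["impact"]

# Rank findings so remediation effort goes to the highest-risk item first.
ranked = sorted(findings, key=lambda f: f["risk"], reverse=True)
```

The ranking is what turns a list of threats and vulnerabilities into an actionable plan, which is the point of steps like these in a risk assessment.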


The buzz at RSA 2019: Cloud security, network security, managed services and more

Remember a few years ago when we were all shocked by dual exhibition floors in Moscone north and south? Well, the RSA conference addressed this by making one contiguous show floor in and between both buildings. Why so many vendors? Because every individual technology in the security technology stack is in play, driven by things like machine learning algorithms, cloud-based resources, automation, managed services components, etc. All these vendors may be a boon to industry trade shows, but they are confusing the heck out of cybersecurity pros. Instead of buzz words and hyperbole, successful vendors will invest in user education and thought leadership, offering guidance and support for customers and prospects. ... Large cybersecurity vendors are jumping on this trend with integrated cybersecurity technology platforms and moving toward enterprise license agreements and subscription-based pricing. Many of the vendors I met with are now tracking multi-product deals and incenting direct sales and distributors in this direction.


Applying Artificial Intelligence in the Agile World

There is a growing number of customer service software products that let you combine your existing knowledge-base support with chatbots to provide pre-canned and self-learning responses to customer queries. This is a great way to start experimenting with self-learning capabilities. Recommendation systems, as popularised by Netflix’s movie recommendation feature, have made significant advancements in recent years. These can be easily integrated into existing systems to add self-learning capabilities. For example, collaborative filtering systems can collect and analyze users' behavioral information in the form of their feedback, ratings, preferences, and feature usage. Based on this information, these systems exploit similarities among users to generate recommendations. The emergence of operational chatbots, as popularised by GitHub’s open source project Hubot, is changing the traditional operations paradigm. Work that previously happened offline is now being brought into chat rooms using communication tools such as Slack.
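The collaborative filtering idea mentioned above can be shown in miniature: find the user most similar to the target (here via cosine similarity over co-rated items, a deliberate simplification) and suggest items that neighbour rated which the target has not seen. The users, items, and ratings are invented for illustration.

```python
from math import sqrt

# Toy ratings matrix: user -> {item: score}.
ratings = {
    "alice": {"A": 5, "B": 3, "C": 4},
    "bob":   {"A": 5, "B": 2, "C": 5},
    "carol": {"A": 1, "B": 5, "D": 4},
}

def cosine(u, v):
    """Cosine similarity restricted to items both users rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    nu = sqrt(sum(u[i] ** 2 for i in shared))
    nv = sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (nu * nv)

def recommend(user):
    """Suggest items rated by the most similar user but unseen by `user`."""
    others = {name: r for name, r in ratings.items() if name != user}
    best = max(others, key=lambda name: cosine(ratings[user], others[name]))
    return [item for item in ratings[best] if item not in ratings[user]]
```

Production recommenders add normalization, many neighbours, and implicit feedback, but the core "exploit similarities amongst users" step is this lookup.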


Cloud monitoring, management tools come up short


Cost and complexity were the top reasons given for cloud-monitoring failures. Forty-five percent said cloud support required additional software licenses or network monitoring tool modules, which they didn’t want to pay for. Forty-four percent indicated that cloud support in their tools was too difficult to implement or use. They simply couldn’t get value out of the updated tools. “Due to complacency and limitations of the software itself, we had to get rid of [a tool],” one IT executive at a North American distributor of heavy, manufactured products told EMA. “It’s not worth the time and investment. We didn’t want to spend more money on a new version that was just a redux of an older version. I didn’t see any real progress in the product.” Furthermore, 35 percent said their vendors had done a poor job of adding cloud-monitoring support to their tools, with the functional updates failing to meet their needs. And 28 percent said their vendors had failed to even establish a roadmap for cloud monitoring.


2019’s Most Inquired Professional Services Marketplace Model

Be it medical services, freelancing, travel or hospitality, to name a few, whatever the specifics of a services marketplace, its prime role is to connect people with service providers. Thumbtack, TaskRabbit, Handy.com and many more service marketplaces are becoming household names. It took a good ten years for customers to warm up to the idea of services marketplaces. Having experimented with many varied economic models, the services marketplace industry has undergone several phases of evolution. It therefore becomes vital for companies to have a killer business model to lead and survive in the competitive marathon. A number of businesses have recognized the essential aspects that go into designing a lucrative business model. This blog gives a firsthand look at the key elements of a professional services marketplace model that fits the industry well.


Get started with natural language processing APIs in cloud


With the popularity of voice assistant technologies, natural language processing APIs and similar services have become one of the most in-demand -- and best-understood -- subdisciplines of AI. There are decades of research to support the field, and it's used in countless products to analyze speech and text for language and sentiment, improve the ability to search unstructured data and even parse intent from conversations as they happen. Natural language processing has only recently become affordable enough to productize for the general public. Today, it is so commonplace that the major cloud providers -- as well as a number of smaller players -- offer it as a service. Each vendor has its own feature set to process natural, human-readable text. Let's review some of the most prominent natural language processing APIs and cloud-based services, as well as ways developers can incorporate them into applications.
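To make the "sentiment" capability concrete without tying the sketch to any one vendor's API, here is a toy lexicon-based scorer. The word list and scores are invented; real cloud NLP services use trained models and return richer results, but the input/output shape (text in, polarity score out) is the same idea.

```python
# Hypothetical polarity lexicon for illustration only.
LEXICON = {"great": 1.0, "love": 1.0, "fast": 0.5,
           "slow": -0.5, "broken": -1.0, "hate": -1.0}

def sentiment(text: str) -> float:
    """Average the polarity of known words; 0.0 if none are known.
    Positive results lean positive, negative results lean negative."""
    words = text.lower().split()
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

sentiment("great product but slow shipping")  # mildly positive: 0.25
```

A cloud sentiment endpoint would replace the lexicon lookup with a model call, but application code consuming the score looks much the same either way.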


Why CISOs Need Partners for Security Success

More and more CISOs are buying into the strategy of involving members of the C-suite, as well as other leaders, in key projects, Pescatore said. For instance, CISOs at power plants and other large manufacturing facilities are working with COOs to show how business results are affected when systems are offline due to a ransomware attack or another type of cyberattack, clearly demonstrating why better security is needed to improve reliability and resilience in the face of an interruption. ... The security team may not understand the goals of the development team and may lack the skills to keep up with the rapid pace of application development, Pescatore explained. "So the slowdown is really two things," Pescatore told me after his presentation. "The first is not understanding how the business works. It's about saying no to everything when sometimes there's no risk that anyone will care about. The second is skills - the security team might not be up to the task of going as fast as the other side."


How to shop for CDN services

Content delivery networks are the transparent backbone of the Internet, bringing users every piece of content on their PCs or mobile browsers, from news stories to shopping sites to live-streaming video. For more than a decade, a content delivery network’s primary mission has been to reduce latency by shortening the distance between a website’s visitor and its server. Today, however, the stakes are much higher. Skyrocketing streaming demands, growing consumer impatience, spikes in global live viewership, and shifting device preferences are all changing CDN services, according to a study by streaming platform Conviva. Its users’ overall viewing hours increased 89 percent in 2018, including a 165 percent jump in streaming TV viewership in the fourth quarter alone, according to the study. Live content drove much of the surge, including a 217 percent spike in U.S. news watching during November’s mid-term elections. At the same time, rising expectations for video streaming quality have viewers more impatient than ever.
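The core latency-reduction mechanism the article describes, routing each viewer to the closest point of presence (PoP), reduces to a minimum over measured round-trip times. The PoP names and RTT figures below are made up for illustration; real CDNs combine latency with load, cost, and content availability when steering traffic.

```python
def pick_pop(rtts_ms: dict) -> str:
    """Return the PoP with the smallest measured round-trip time (ms)."""
    return min(rtts_ms, key=rtts_ms.get)

# Hypothetical measurements from one viewer to three PoPs.
measured = {"us-east": 18.2, "us-west": 74.9, "eu-west": 110.3}
pick_pop(measured)  # 'us-east'
```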


How AWS, Azure and Google approach service mesh technology


Some users only want service mesh connectivity and load balancing for their microservices. Here, Microsoft users will want to consider Azure Service Fabric. It supports deployment on other public clouds, which makes it the top service mesh for multi-cloud. Also consider Google's Kubernetes Engine and Istio, particularly if you're a Kubernetes shop. Amazon's basic service mesh tools are great for AWS users, but less versatile in multi- and hybrid cloud deployments. The middle ground, where most users will probably find themselves, is a bit more difficult to read at this point. Microsoft and Google have signaled they'll support a fairly portable service mesh vision via Azure Service Fabric and Google's Kubernetes-Istio combination, respectively. Amazon's middle ground is still divided and somewhat primitive compared to its competitors, which likely means more upgrades are on the way. In the long run, service mesh, managed container services and even serverless are likely to converge into a single uniform resource model for applications.



Quote for the day:


"Perhaps the ultimate test of a leader is not what you are able to do in the here and now - but instead what continues to grow long after you're gone" -- Tom Rath