Daily Tech Digest - June 26, 2021

DevOps requires a modern approach to application security

As software development has picked up speed, organizations have deployed automation to keep up, but many are having trouble working out the security testing aspect of it. Current application security testing tools tend to scan everything all the time, overwhelming and overloading teams with too much information. If you look at all the tools within a CI pipeline, there are tools from multiple vendors, including open-source tools, that run separately but work together in an automated fashion while integrating with other systems like ticketing tools. “Application security really needs to make that shift in the same manner to be more fine-grained, more service-oriented, more modular and more automated,” said Carey. Intelligent orchestration and correlation is a new approach being used to manage security tests, reduce the overwhelming amount of information and let developers focus on what really matters: the application. While the use of orchestration and correlation solutions is not uncommon on the IT operations side for things like network security and runtime security, they are just beginning to cross into the application development and security side of things, Carey explained.


Databricks cofounder’s next act: Shining a Ray on serverless autoscaling

Simply stated, Ray provides an API for building distributed applications. It enables any developer working on a laptop to deploy a model to a serverless environment, where deployment and autoscaling are automated under the covers. It delivers a serverless experience without requiring the developer to sign up for a specific cloud serverless service or know anything about setting up and running such infrastructure. A Ray cluster consists of a head node and a set of worker nodes that can run on any infrastructure, on-premises or in a public cloud. Its capabilities include an autoscaler that inspects pending tasks, activates the minimum number of nodes needed to run them, and monitors execution to ramp nodes up or shut them down. There is some assembly required, however, as the developer needs to register the compute instance types. Ray can start and stop VMs in the cloud of choice; the Ray docs explain how to do this in each of the major clouds and on Kubernetes. One would be forgiven for getting a sense that Ray is déjà vu all over again. Stoica, who was instrumental in fostering Spark's emergence, is taking on a similar role with Ray.
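As a sketch of that developer experience, here is a minimal use of Ray's Python task API; the workload is invented, and the autoscaling behavior depends on how the cluster itself is configured:

```python
import ray

# Connect to a running Ray cluster, or start a local one if none exists.
ray.init()

# Any Python function becomes a distributed task with a decorator;
# the autoscaler adds worker nodes when pending tasks exceed capacity.
@ray.remote
def score(batch):
    # Placeholder "model inference" over one batch of inputs.
    return [x * 2 for x in batch]

# Fan work out across the cluster; Ray schedules tasks onto available nodes.
futures = [score.remote(list(range(i, i + 4))) for i in range(0, 16, 4)]
print(ray.get(futures))
```

The same script runs unchanged on a laptop or against a multi-node cluster, which is the serverless-style experience described above.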


Akka Serverless is really the first of its kind

Akka Serverless provides a data-centric backend application architecture that can handle the huge volume of data required to support today’s cloud native applications with extremely high performance. The result is a new developer model, providing increased velocity for the business in a highly cost-effective manner while leveraging existing developers and serverless cloud infrastructure. Another huge bonus of this new distributed state architecture is that, in the same way serverless infrastructure offerings free businesses from worrying about servers, Akka Serverless eliminates the need for databases, caches, and message brokers to be developer-level concerns. ... Developers can express their data structures in code, and the way Akka Serverless works makes it straightforward to think about the “bounded context” and model their services that way too. With Akka Serverless we tightly integrate the building blocks for highly scalable and extremely performant services, but we do so in a way that lets developers write “what” they want to connect to and lets the platform handle the “how”. As a best practice you want microservices to communicate asynchronously using message brokers, but you don’t want every developer to have to figure out how to connect to and interact with them.
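To illustrate the "express your data structure in code" idea, here is a generic event-sourced entity sketched in Python. This shows the pattern only; it is not the Akka Serverless SDK, and the command and event names are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Cart:
    items: dict = field(default_factory=dict)

def handle_command(state, command):
    # Validate against current state, then emit events (the "what");
    # persisting and distributing them is the platform's job (the "how").
    if command["action"] == "add":
        return [{"event": "ItemAdded", "sku": command["sku"], "qty": command["qty"]}]
    raise ValueError(f"unknown command: {command['action']}")

def apply_event(state, event):
    # State is never written directly; it is rebuilt by replaying events.
    if event["event"] == "ItemAdded":
        state.items[event["sku"]] = state.items.get(event["sku"], 0) + event["qty"]
    return state

log = handle_command(Cart(), {"action": "add", "sku": "A1", "qty": 2})
state = Cart()
for e in log:
    state = apply_event(state, e)
print(state.items)  # {'A1': 2}
```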


Windows 11 enables security by design from the chip to the cloud

The Trusted Platform Module (TPM) is a chip that is either integrated into your PC’s motherboard or added separately into the CPU. Its purpose is to help protect encryption keys, user credentials, and other sensitive data behind a hardware barrier so that malware and attackers can’t access or tamper with that data. PCs of the future need this modern hardware root-of-trust to help protect against both common attacks like ransomware and more sophisticated attacks from nation-states. Requiring TPM 2.0 elevates the standard for hardware security by requiring that built-in root-of-trust. TPM 2.0 is a critical building block for providing security with Windows Hello and BitLocker to help customers better protect their identities and data. In addition, for many enterprise customers, TPMs help facilitate Zero Trust security by providing a secure element for attesting to the health of devices. Windows 11 also has out-of-the-box support for Microsoft Azure Attestation (MAA), bringing hardware-based Zero Trust to the forefront of security and allowing customers to enforce Zero Trust policies when accessing sensitive resources in the cloud with supported mobile device management (MDM) solutions like Intune, or on-premises.


Switcheo — Zilliqa bridge will be a game-changer for BUILDers & HODlers!

Currently, the vast majority of blockchains operate in silos. This means that most blockchains can only read, transact, and access data within a single chain. This limits the blockchain user experience and hinders user adoption. Without interoperability, we have individual ecosystems where users and developers have to choose which blockchain to interact with. Once they choose a blockchain, they are limited to using its features and offerings. Not the most decentralised environment to build on, right? No blockchain should be an island — and working alone doesn’t end well. We need to stay connected to different protocols so ideas, dApps and users can travel across platforms conveniently. With interoperability, users and developers can seamlessly transact with multiple blockchains and benefit from cross-chain ecosystems and application offerings in areas like decentralised finance (DeFi), gaming, supply chain logistics, and more. Interoperability means users and developers are no longer stuck choosing one blockchain over another; instead, they can benefit from multiple chains being able to interlink.


JSON vs. XML: Is One Really Better Than the Other?

Despite serving very similar purposes, there are some critical differences between JSON and XML. Distinguishing between the two helps decide when to opt for one or the other, and which is the better alternative for specific needs and goals. First, as previously mentioned, XML is a markup language, while JSON is a data format. One of the most significant advantages of using JSON is that the file size is smaller; thus, transferring data is faster than with XML. Moreover, since JSON is compact and very easy to read, the files look cleaner and more organized, without empty tags cluttering the data. The simplicity of its structure and minimal syntax make JSON easier for humans to use and read. XML, by contrast, is often characterized as complex and old-fashioned owing to a tag structure that makes files bigger and harder to read. However, JSON vs. XML is not entirely a fair comparison. JSON is often wrongly perceived as a substitute for XML, but while JSON is a great choice for simple data transfers, it does not perform any processing or computation.
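To make the size and readability comparison concrete, here is the same (invented) record in both formats, parsed with Python's standard library:

```python
import json
import xml.etree.ElementTree as ET

# The same record expressed in both formats.
json_doc = '{"user": {"id": 42, "name": "Ada"}}'
xml_doc = '<user><id>42</id><name>Ada</name></user>'

# JSON parses directly into native data structures...
user = json.loads(json_doc)["user"]
print(user["name"])            # Ada

# ...while XML parsing yields a tree navigated element by element.
root = ET.fromstring(xml_doc)
print(root.find("name").text)  # Ada

# The JSON payload is also smaller, since it carries no closing tags.
print(len(json_doc), len(xml_doc))  # 36 40
```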


How to Build Your Own Blockchain in NodeJS

It can be helpful to think of blockchains as augmented linked lists, or arrays in which each element points to the preceding element. Each block (equivalent to an element in an array) of the blockchain contains at least the following: a timestamp of when the block was added to the chain; some sort of relevant data (in the case of a cryptocurrency this data would store transactions, but blockchains can usefully store much more than cryptocurrency transactions); the encrypted hash of the block that precedes it; and an encrypted hash based on the data contained within the block, including the hash of the previous block. The key component that makes a blockchain so powerful is that embedded in each block's hash is the data of the previous block (stored through the previous block's hash). This means that if you alter the data of a block, you alter its hash, and therefore invalidate the hashes of all subsequent blocks. While this could probably be done with vanilla JavaScript, for the sake of simplicity we are going to write a Node.js script and take advantage of Node.js's built-in Crypto package to calculate our hashes.
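Since the Node.js walkthrough itself is beyond this excerpt, here is a minimal sketch of the same hash-chain idea in Python's hashlib; the block fields follow the description above and the transaction data is invented:

```python
import hashlib
import json
import time

def block_hash(block):
    # The hash covers the data AND the previous block's hash,
    # which is what chains the blocks together.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)

chain = []
add_block(chain, {"from": "alice", "to": "bob", "amount": 5})
add_block(chain, {"from": "bob", "to": "carol", "amount": 2})

# Tampering with block 0 invalidates every later block:
chain[0]["data"]["amount"] = 500
ok = all(
    b["prev_hash"] == (chain[i - 1]["hash"] if i else "0" * 64)
    and b["hash"] == block_hash({k: v for k, v in b.items() if k != "hash"})
    for i, b in enumerate(chain)
)
print(ok)  # False
```

Running the validation before the tampering line prints True; flipping any field in any block flips it to False, which is exactly the property the paragraph describes.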


5 Practices to Improve Your Programming Skills

Programmers have to write code that impresses both hardware and other programmers: hardware through performance, other programmers through clean code. There are indeed several approaches to solving the same software engineering problem. The performance-first mindset motivates you to select the most practical and best-performing solution. Performance is still crucial regardless of modern hardware, because accumulated minor performance issues can badly affect the whole software system in the future. Implementing hardware-friendly solutions requires a knowledge of computer science fundamentals, because those fundamentals teach us how to use the right data structures and algorithms. Choosing the right data structures and algorithms is the key to success behind every complex software engineering project. Some performance problems can stay hidden in the codebase, and your performance test suite may not cover those scenarios. Your goal should be to apply performance patches as soon as you spot such problems.
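As a small illustration of why choosing the right data structure matters, the snippet below compares membership tests on a list (O(n)) and a set (O(1)); exact timings will vary by machine, but the asymptotic gap is the point:

```python
import timeit

n = 100_000
as_list = list(range(n))
as_set = set(as_list)

# Membership test for a worst-case element (the last one).
list_time = timeit.timeit(lambda: n - 1 in as_list, number=100)
set_time = timeit.timeit(lambda: n - 1 in as_set, number=100)

# The set lookup is orders of magnitude faster at this size.
print(f"list: {list_time:.4f}s  set: {set_time:.6f}s")
```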


Containers Vs. Bare Metal, VMs and Serverless for DevOps

The workhorse of IT is the computer server on which software application stacks run. The server consists of an operating system, computing, memory, storage and network access capabilities, and is often referred to as a computer machine or just a “machine.” A bare metal machine is a dedicated server using dedicated hardware. Data centers have many bare metal servers that are racked and stacked in clusters, all interconnected through switches and routers. Human and automated users of a data center access the machines through access servers, high-security firewalls and load balancers. The virtual machine introduced an operating system simulation layer between the bare metal server’s operating system and the application, so one bare metal server can support more than one application stack with a variety of operating systems. This provides a layer of abstraction that allows the servers in a data center to be software-configured and repurposed on demand. In this way, virtual machines can be scaled horizontally, by configuring multiple parallel machines, or vertically, by allocating more resources to a single virtual machine.


Debunking Three Myths About Event-Driven Architecture

Event-driven applications are often criticized for being hard to understand when it comes to execution flow. Their asynchronous and loosely coupled nature makes it difficult to trace the control flow of an application. For example, an event producer does not know where the events it is producing will end up. Similarly, the event consumer has no idea who produced the event. Without the right documentation, it is hard to understand the architecture as a whole. Standards like AsyncAPI and CloudEvents help document event-driven applications by listing the exposed asynchronous operations, the structure of the messages they produce or consume, and the event brokers they are associated with. The AsyncAPI specification produces machine-readable documentation for event-driven APIs, just as the OpenAPI Specification does for REST-based APIs. It documents the event producers and consumers of an application, along with the events they exchange. This provides a single source of truth for the application in terms of control flow. Apart from that, the specification can be used to generate implementation code and validation logic.
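To give a flavor of the structure these standards impose, here is a minimal CloudEvents 1.0-style envelope built in Python; the event type and source URI are invented for illustration, and a real producer would publish this to its broker:

```python
import json
import uuid
from datetime import datetime, timezone

# A CloudEvents 1.0 envelope: standard attributes describe the event,
# so any documented consumer knows what it is receiving and from where.
event = {
    "specversion": "1.0",
    "id": str(uuid.uuid4()),
    "source": "/orders/service",          # hypothetical producer URI
    "type": "com.example.order.created",  # hypothetical event type
    "time": datetime.now(timezone.utc).isoformat(),
    "datacontenttype": "application/json",
    "data": {"orderId": 1234, "total": 99.95},
}

print(json.dumps(event, indent=2))
```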



Quote for the day:

"Leadership is being the first egg in the omelet." -- Jarod Kintz

Daily Tech Digest - June 24, 2021

The pressure is on for technologists as they face their defining moment

For IT and business leaders, the message is clear. Technologists remain fully committed to the cause – they are desperate to have a positive impact, guide their organisations through the current crisis and leave a legacy of innovation. But it’s simply not sustainable (or fair) to ask technologists to continue as they are, when 91% say that they need to find a better work-life balance in 2021. As an industry and as business leaders, we need to be doing more to manage workload and stress, and to protect wellbeing and mental health. Technologists have to be given more support to deal with the heightened level of complexity in which they are now operating. That means having access to the right tools, data, and resources, and organisations protecting their wellbeing, both inside and outside working hours. In 2018, we revealed that 9% of technologists were operating as Agents of Transformation – elite technologists with the skills, vision and ambition to deliver innovation within their organisations – but that organisations needed five times as many technologists performing at that level in order to compete over the next ten years.


5 Characteristics of a Modern Enterprise Architect

Agile thinking is essential not just for the enterprise architect but for many other IT jobs as well; enterprise architecture, however, is one field in which it is indispensable. Agile thinking doesn’t just mean thinking fast, it means thinking fast and right: you have to adapt to situations as you improve your models and solutions. Being an agile thinker is key to a successful career as a modern enterprise architect. As market conditions change rapidly, you must adapt to all the changes and keep your solutions robust. Data-driven decision makers use facts and logic from the available information to make informed decisions. As many professionals say, everything you need is in the data available to you. Therefore, data-driven decision-making is an essential quality for any enterprise architect. This process will help identify management systems, operating routes, and much more that align with your enterprise-level goals. One of the primary sources of data for decision-making is the users themselves. Companies usually collect data during user sessions and use that data to analyze user behavior.


Best Practices To Understand And Disrupt Today’s Cybersecurity Attack Chain

Unified security must be deployed broadly and consistently across every edge. Far too many organizations now own some edge environment that is unsecured or undersecured, and cybercriminals are taking full advantage of this. The most commonly unprotected/underprotected environments include home offices, mobile workers, and IoT devices. OT environments are also often less secure than they should be, as are large hyperscale/hyper-performance data centers where security tools cannot keep up with the speed and volume of traffic requiring inspection. Security solutions also need to be integrated so they can see and talk to each other. Isolated point security products can actually decrease visibility and control, especially as threat actors begin to deliver sophisticated, multi-vector attacks that take advantage of an outdated security system’s inability to correlate threat intelligence across devices or edges in real time, or provide a consistent, coordinated response to threats. Addressing this challenge requires an integrated approach, built around a unified security platform that can be extended to every new edge environment.


rMTD: A Deception Method That Throws Attackers Off Their Game

rMTD is the process of making an existing vulnerability difficult to exploit. This can be achieved through a variety of techniques that are either static – built in during the compilation of the application, referred to as Compile Time Application Self Protection (CASP) – or dynamically enforced during runtime, referred to as Runtime Application Self Protection (RASP). CASP and RASP are not mutually exclusive and can be combined. CASP modifies the application's generated assembly code during compilation in such a way that no two compilations generate the same assembly instruction set. Hackers rely on the known assembly layout of a static compilation to craft their attack; once they've built it, they can target any system running the same binaries. They leverage the static nature of the compiled application or operating system to hijack systems. This is analogous to a thief getting a copy of the same safe you have and having the time to figure out how to crack it. The only difference is that in the case of a hacker, it's a lot easier to get their hands on a copy of the software than a safe, and the vulnerability is known and published.


SREs Say AIOps Doesn’t Live Up to the Hype

Why is AIOps so slow to catch on? Ultimately, the barriers facing these tools are the same as those facing human engineers: massive and growing complexity in IT environments. As digital products become more dependent on third-party cloud services, as the number of things businesses want to track grows (from infrastructure to application to experience), the sheer volume, velocity and variety of monitoring data has exploded. ... Compounding the problem, enterprises increasingly rely on multiple “same-service” providers for IT services. That is, they use multiple cloud providers, multiple DNS providers, multiple API providers, etc. There are sound business reasons for doing so, such as adding resiliency and drawing on different vendors’ strengths in different areas. But even when two providers are doing basically the same thing, they use different interfaces and instrumentation, and their data sources often employ different metrics, data structures, and taxonomies. Whether you’re asking a human being or an AI-driven tool to solve this problem, this heterogeneity makes it extremely difficult to visualize the complete picture across the infrastructure. It also creates gray areas around how best to take advantage of each vendor’s different rules and toolsets. 


How to convince your boss that cybersecurity includes Active Directory

Here’s the punchline: Everything relies on Active Directory. To get your boss to care, start with a discussion about operations and which parts are business critical. Have a business-level discussion, with you keeping score at a technical level. For example, when your boss says “Development needs to be running 100 percent of the time,” you work backward through all the systems, applications, and endpoints that need AD to function. Repeat this until you have a sufficient list of critical workloads and business operations that require AD be secure and functional. Next, talk about which of those environments need to be protected, which contain sensitive data, and which need to be resilient against a cyberattack. Let your boss talk while you just sit back, smile, and check off the boxes of everything that relies heavily on AD. Once you are armed with enough business ammo, have the technical discussion about how each of the business functions listed by your boss rely on AD to provide users access to data, applications, systems, and environments.


Next.js 11: The ‘Kubernetes’ of Frontend Development

The main innovation behind this is that Vercel has placed the entire dev server technology, which previously lived in a Node process on your local machine, entirely in the web browser, Rauch said. “So, all the technology for transforming the front-end UI components is now entirely ‘dogfooded’ inside the web browser, and that’s giving us the next milestone in terms of developer performance,” he said. “It makes front-end development multiplayer instead of single player.” Moreover, by tapping into ServiceWorker, WebAssembly and ES Modules technology, Vercel makes everything that’s possible when you run Next.js on a local machine possible in the context of a remote collaboration. Next.js Live also works offline and eliminates the need to run or operate remote virtual machines. Meanwhile, the Aurora team in the Google Chrome unit has been working on technology to advance Next.js and has delivered Conformance for Next.js and the Next.js Script Component. Rauch described Conformance as a co-pilot that helps the developer stay within certain guardrails for performance.


From Garry Kasparov to Google: Hamsa Buvaraghan’s Journey In The World of Algorithms

My fascination with AI began when I was in India, back in 1997, when I heard about IBM’s supercomputer Deep Blue defeating Garry Kasparov. It made top headlines then, and afterwards I wanted to explore the subject further. However, access to research papers was really hard at the time, as I didn’t even own a computer or have access to the internet. I was introduced to computers by my father when I got access to a computer in his office at the age of 10; the first thing I explored back then was Lotus Notes. With encouragement from my parents, I later pursued Computer Science Engineering. When I started working, I read several IEEE research papers, including “Smart Games: Beyond the Deep Blue Horizon” and “Deep Blue’s Hardware-Software Synergy”. I was fascinated not only with AI itself but with the application of AI to solving real problems. I was also passionate about Biomedical Engineering, which led me to books on neural networks and AI for biomedical engineering and papers on training neural networks for computer-aided diagnosis. When it comes to machine learning, I am largely self-taught.


The CIO's Role in Maintaining a Strong Supply Chain

Access to up-to-the-minute information is essential for a CIO who hopes to maintain a strong supply chain. "Real-time data ensures that your supply team has the proper information required to make good, reliable decisions," Roberge said. "My advice is to automate as many data points as possible -- the fewer spreadsheets the better." ... Today's supply chain cannot be managed effectively or efficiently without adequate foundational tools, Furlong cautioned. "Appropriate technologies, implemented in a timely manner, can help an organization transform the supply chain and leapfrog the competition," he explained. "This includes everything from advanced predictive analytics to ... cutting-edge technologies such as blockchain, which is being used to track shipments at a micro level." CIOs also need to regularly assess and replace aging supply chain software, hardware, and network tools with modern systems leveraging both internal resources and third-party alliances. "Business requirements are changing rapidly, and supply chain technology ... must be flexible enough to handle complex business processes but also simplify supply chain processes," Furlong said.


What is stopping data teams from realising the full potential of their data?

Data warehouses have been a popular option since the 1980s and revolutionised the data world we live in, enabling business intelligence tools to be plugged in to ask questions about the past. Looking for future insights is more difficult, however, and there are restrictions on the volume and formats of the data that can be analysed. Another option is data lakes, which enable artificial intelligence (AI) to be utilised to ask questions about future scenarios. However, data lakes have a weakness of their own: all data can be stored, cleaned and analysed, but lakes can quickly become disorganised and turn into ‘data swamps’. Taking the best of both options, a new data architecture is emerging. Lakehouses are a technological breakthrough that finally allows businesses to look both to future scenarios and back to the past in the same space, at the same time, revolutionising the future of data capabilities. It’s the solution enterprises have been calling out for throughout the last decade at least; by combining the best elements of the data warehouse and the data lake, the lakehouse enables enterprises to implement a superior data strategy, achieve better data management, and squeeze the full potential out of their data.



Quote for the day:

"A company is like a ship. Everyone ought to be prepared to take the helm." -- Morris Wilks

Daily Tech Digest - June 23, 2021

Take My Drift Away

Drift is a change in distribution over time. It can be measured for model inputs, outputs, and actuals. Drift can occur because your models have grown stale, because bad data is flowing into your model, or even because of adversarial inputs. Now that we know what drift is, how can we keep track of it? Essentially, tracking drift in your models amounts to keeping tabs on what has changed between your reference distribution, such as the one seen when you were training your model, and your current (production) distribution. Models are not static. They are highly dependent on the data they are trained on. Especially in hyper-growth businesses where data is constantly evolving, accounting for drift is important to ensure your models stay relevant. Change in the input to the model is almost inevitable, and your model can’t always handle this change gracefully. Some models are resilient to minor changes in input distributions; however, as these distributions stray far from what the model saw in training, performance on the task at hand will suffer. This kind of drift is known as feature drift or data drift. It would be amazing if the only things that could change were the inputs to your model, but unfortunately, that’s not the case.
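As a minimal sketch of what tracking drift can look like for a single numeric feature, the snippet below compares a reference distribution against a simulated production distribution using a two-sample Kolmogorov-Smirnov test; the shift and the alerting threshold are illustrative:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Reference distribution: the feature as seen at training time.
reference = rng.normal(loc=0.0, scale=1.0, size=5000)

# Current distribution: production traffic whose mean has drifted.
current = rng.normal(loc=0.5, scale=1.0, size=5000)

# The KS statistic measures the largest gap between the two CDFs.
stat, p_value = ks_2samp(reference, current)
if p_value < 0.01:  # illustrative alerting threshold
    print(f"Feature drift detected: KS={stat:.3f}, p={p_value:.2e}")
```

In production, a check like this would run on a schedule for every monitored feature, with the training-time distribution cached as the reference.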


7 best practices for enterprise attack surface management

To mount a proper defense, you must understand what digital assets are exposed, where attackers will most likely target a network, and what protections are required. So, increasing attack surface visibility and building a strong representation of attack vulnerabilities is critical. The types of vulnerabilities to look for include older and less secure computers or servers, unpatched systems, outdated applications, and exposed IoT devices. Predictive modeling can help create a realistic depiction of possible events and their risks, further strengthening defense and proactive measures. Once you understand the risks, you can model what will happen before, during and after an event or breach. What kind of financial loss can you expect? What will be the reputational damage of the event? Will you lose business intelligence, trade secrets or more? “The successful [attack surface mapping] strategies are pretty straightforward: Know what you are protecting (accurate asset inventory); monitor for vulnerabilities in those assets; and use threat intelligence to know how attackers are going after those assets with those vulnerabilities,” says John Pescatore, SANS director of emerging security trends.


How Chainyard built a blockchain to bring rivals together

There’s the technology of building the blockchain, and then there’s building the network and the business around that. So there are multiple legs to the stool, and the technology is actually the easiest piece. That’s just establishing architecturally how you want to embody that network, how many nodes, how many channels, how your data is going to be structured, and how information is going to move among the blockchain. But the more interesting and challenging exercise, as is true with any network, is participation. I think it was Marc Andreessen who famously said “People are on Facebook because people are on Facebook.” You have to drive participation, so you have to consider how to bring participants to this network, how organizations can be engaged, and what’s going to make it compelling for them. What’s the value proposition? What are they going to get out of it? How do you monetize and how do you operate it? And you can’t figure that on the fly. So we went out to bring the top-of-the-food-chain organizations in various industries on board, so they can help establish the inertia for the network to take off. 


Strategies, tools, and frameworks for building an effective threat intelligence team

The big three frameworks are the Lockheed Martin Cyber Kill Chain®, the Diamond Model, and MITRE ATT&CK. If there’s a fourth, I would add VERIS, which is the framework that Verizon uses for their annual Data Breach Investigations Report. I often get asked which framework is the best, and my favorite answer as an analyst is always, “It depends on what you’re trying to accomplish.” The Diamond Model offers an amazing way for analysts to cluster activity together. It’s very simple and covers the four parts of an intrusion event. For example, if we see an adversary today using a specific malware family plus a specific domain pattern, and then we see that combination next week, the Diamond Model can help us realize those look similar. The Kill Chain framework is great for communicating how far an incident has gotten. We just saw reconnaissance or an initial phish, but did the adversary take any actions on objectives? MITRE ATT&CK is really useful if you’re trying to track down to the TTP level. What are the behaviors an adversary is using? You can also incorporate these different frameworks.


Building a Scalable Data Service in the Modern Microservices World

The microservices architecture not only makes the whole application much more decoupled and cohesive, it also makes teams agile enough to deploy frequently without interrupting or depending on others. Communication among services is most commonly done over HyperText Transfer Protocol. The request and response format (XML or JSON) is known as the API contract, and that’s what binds services together to form the complete behaviour of the application. In the example given above, we are talking about an application that serves both web and mobile users, and allows external services to integrate using the REST API endpoints provided to end users. Each of the use cases has its own endpoints exposed in front of individual load balancers that manage incoming requests with the best available resources. Each internal service contains a web server that handles all incoming requests and forwards them to the right service or sends them to the in-house application, an application server that hosts all the business logic of the microservice, and a quasi-persistent layer: a local replication of the database based on spatial and/or temporal locality of data.


Validation of Autonomous Systems

Autonomous systems have complex interactions with the real world. This raises many questions about their validation: How can decision making be traced back and judged after the fact? How do we supervise learning, adaptation, and especially correct behaviors – specifically when critical corner cases are observed? Another challenge is how to define reliability in the event of failure. With artificial intelligence and machine learning, we need to satisfy algorithmic transparency. For instance, in a neural network that is obviously no longer algorithmically tractable, what rules determine how an autonomous system reacts to several hazards at the same time? Classic traceability and regression testing will certainly not work. Rather, future verification and validation methods and tools will include more intelligence based on big data exploits, business intelligence, and their own learning, in order to learn about and improve software quality in a dynamic way.

The New Future Of Work Requires Greater Focus On Employee Engagement

When it comes down to it, engagement is all about employee empowerment—helping employees not just be satisfied in their work but feel like valued members of the team. Unfortunately, 1 in 4 employees is planning to look for work with a new employer once the pandemic is over, largely due to a lack of empowerment in the workplace—a lack of advancement, upskilling opportunities, and more. Organizations like Amazon, Salesforce, Microsoft, AT&T, Cognizant and others have started upskilling initiatives designed to help employees, wherever they are in the company, advance to new positions. These organizations are taking an active role in the lives of their employees and are helping them grow. These reasons are likely why places like Amazon repeatedly top the lists of best places to work. Before the pandemic, just 24% of businesses felt employee engagement was a priority. Following the pandemic, the number hit nearly 36%. Honestly, that’s still shockingly low! It’s just common sense that engaged employees will serve a company better.


Architectural Considerations for Creating Cloud Native Applications

The ability to deploy applications with faster development cycles also opens the door to more flexible, innovative, and better-tailored solutions. All of this has a positive impact on customer loyalty, increases sales, and lowers operating costs, among other benefits. As we mentioned, microservices are the foundation of cloud native applications. However, their real potential is unlocked by containers, which allow them to package the entire runtime environment and all its dependencies, libraries, binaries, etc., into a manageable, logical unit. Application services can then be transported, cloned, stored or used on demand as required. From a developer’s perspective, the combination of microservices and containers supports the 12-Factor App methodology. This methodology aims primarily to avoid the most common problems programmers face when developing modern cloud native applications, and the benefits of following its guidelines are numerous.
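As one concrete example of those guidelines, Factor III (store config in the environment) might look like the sketch below; the variable names are illustrative:

```python
import os

# Factor III: config lives in the environment, not in the image,
# so the same container runs unchanged in dev, test, and prod.
DATABASE_URL = os.environ["DATABASE_URL"]            # required; fails fast if absent
CACHE_TTL = int(os.environ.get("CACHE_TTL", "300"))  # optional, with a default
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"

print(f"connecting to {DATABASE_URL} (ttl={CACHE_TTL}s, debug={DEBUG})")
```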


How to be successful on the journey to the fully automated enterprise

When first embarking on automation, many businesses want to keep their options open and use the time available to explore what automation can do for their teams and their businesses. The first step in the journey to full automation is often a testing phase, which relies on proving a return on investment and consequently convincing the C-suite, departmental heads, and IT of its benefits. Next, once automation has been added to the agenda, organizations should create an RPA Centre of Excellence to champion and drive use of the technology and to provide a centralised view and governance. At this stage, select processes are chosen, often in isolation, on the basis that they are high-potential but low-value tasks which can quickly be automated and show immediate returns in terms of increased productivity or customer satisfaction. This top-down, process-by-process approach, implemented by RPA experts, will help automation programs get off the ground. NHS Shared Business Services (SBS), for example, chose the highly labour-intensive task of maintaining cashflow files as its first large-scale automation.


SOC burnout is real: 3 preventative steps every CISO must take

While most technology solutions aim to make the SOC/IR team more efficient and effective, all too often organizations take one step forward and two steps back when a solution creates ancillary workloads for the team. The first measure of a security tool is whether it addresses the pain or gap that the organization needs to fill. The second is whether the tool is purpose-built by experts who understand the day-to-day responsibilities of the SOC/IR team and treat those as requirements in the design of their solution. As an example, there is a trend in the network detection and response (NDR) market to hail the benefits of machine learning (ML). Yes, ML helps to identify adversary behavior faster than manual threat hunting, but at what cost? Most anomaly-based ML NDR solutions require staff to perform in-depth “detection training” for four weeks, plus tedious ongoing training, to attempt to make the number of false positives “manageable.” Some security vendors are redefining their software as a service (SaaS) offering as Guided-SaaS. Guided-SaaS security allows teams to focus on what matters – adversary detection and response.



Quote for the day:

"Leaders dig into their business to learn painful realities rather than peaceful illusion." -- Orrin Woodward

Daily Tech Digest - June 22, 2021

What makes a real-time enterprise?

Being a ‘real-time’ enterprise today is typically evaluated against two criteria: the ability to capture, collect and store data as it comes in; and the ability to respond to it at the point of consumption. Analytics solutions that allow for this are highly sought after, as the capability is considered a huge competitive differentiator in our fast-paced digital world. However, while there’s much buzzword bingo about real-time data, decision-making and insight, enterprises' readiness to become real-time varies, owing to a lack of understanding of how it practically aligns with their goals, resulting in lost opportunities and wasted resources. ... We find the sudden, hurried shift among enterprises to grasp real-time analytics typically starts when organisations examine their data and see they are not making decisions fast enough to affect business outcomes. Many organisations misconstrue the cause of these common analytics problems as a lack of real-time analytics capability, when there are likely several other factors at play preventing them from making decisions efficiently and effectively: a long and arduous analysis process, analysis fatigue, human bias resulting in accidental discovery, and a lack of guidance in understanding what the insights mean.


Does Your Cyberattack Plan Include a Crisis Communications Strategy?

During a cyberattack, one of the most overlooked — and consequential — areas for enterprises is implementing an effective crisis communications strategy. Just as you need to shore up the technology, legal, financial, and compliance aspects of your cybersecurity preparation plan, you must also prioritize crisis management and communications. But where should you start? Below are five crisis communications tips to form the foundation of your strategy. ... Our media landscape is characterized by a 24/7 news cycle, ubiquitous social media channels, and misinformation powered by algorithmic artificial intelligence (AI) and delivered instantly on a global scale to billions of people. This shows no sign of abating. What does that mean? Time is not on your side. But with an actionable plan in place, you will be much better prepared. ... With your crisis communications framework in place, it is time for action. Picture this: your company is the target of a ransomware attack, and while you are desperately trying to address the incident, media outlets begin to report on it, citing posts on Twitter.


How to Retain Your IT Talent

It seems easy to create an open and collaborative work culture, but in IT it can be a special challenge. This is because the nature of IT work is factual and introspective. It's easy to get buried in a project and forget to communicate status to a workmate -- or to be consumed by planning or budgeting as a CIO and forget to “walk the floor” and visit with staff members. Those heading up IT can make a conscious effort to improve open communication and engagement by setting an example of personal engagement with staff themselves. When staff members understand IT’s strategic direction because the CIO has directly communicated it to them, as well as why they are undertaking certain projects, work becomes purposeful. Team members also benefit if they know that support is available when they need it, and when they know that they can freely go to anyone's office, from the CIO on down. The net result is that people are happier at work, and less likely to leave an inclusive work culture. ... From here, training and mentoring plans for developing employee potential should be defined and followed. Career and skills development plans should be targeted for up-and-coming employees and recent hires, and also for longer-term staff who want to cross train and learn something new.


The positive levers of a digital transformation journey

It’s not just processes. People play an equally important role in the transformation exercise. Shifting from a traditional workplace to a digital one involves an overall change in the mindset of the people behind the business. A company’s culture and behaviour determine how well it can adapt to being ‘digital first’. To undertake digital transformation seamlessly, many organisations ensure transparency by communicating their expectations clearly to their employees. This transformation also helps in highlighting skill gaps within the organisation and sheds light on which of these gaps can be filled by AI and automation, allowing for the repurposing of employee intelligence. Rahul Tandon, head, digital transformation at BPCL said, “Many initiatives and developments are bringing in a lot of automation and AI with a clear objective to absolve our field teams of all repetitive transactional activities and focus solely on business development and efficient customer interactions.” This approach, he says, has infused new energy to the field teams. “We hope it will become the preferred choice for all stakeholders and eventually impact our bottom line positively.”


How to rethink risks with new cloud deployments

With microservices, you have hundreds of different functions running separately, each with their own unique purpose and triggered from different events. Each one of these functions requires its own unique authentication protocol, and that leaves room for error. Attackers will look for things like a forgotten resource or redundant code, or open APIs with known security gaps to gain access to the environment. This will then allow the attacker to gain access to a website containing sensitive content or functions, without having to authenticate properly. While the service provider will handle much of the password management and recovery workflows, it is up to the customers to make sure that the resources themselves are properly configured. However, things get more complicated when functionality is not triggered from an end-user request, but rather during the application flow, in such a way as to bypass the authentication schema. To address this issue, it is important to have continuous monitoring of your application, including the application flow, so you can identify application triggers. From there, you will want to create and categorize alerts for when resources fail to include the appropriate permissions, have redundant permissions, or the triggered behavior is anomalous or non-compliant.


How Containers Simplify DevOps Workflows and CI/CD Pipelines

DevOps has created a way to automate processes so teams can build, test and ship code faster and more reliably. Continuous integration/continuous delivery (CI/CD) isn’t a novel concept, but tools like Jenkins have done much to define what a CI/CD pipeline should look like. While DevOps represents a cultural change in the organization, CI/CD is the core engine that drives the success of DevOps. With CI, teams implement smaller changes more often and check the code into shared version control repositories. This brings far more consistency to the building, packaging and testing of apps, leading to better collaboration and software quality. CD begins where CI ends. Since teams work across several environments (prod, dev, test, etc.), the role of CD is to automate code deployment to these environments and execute service calls to databases and servers. The CI/CD concept isn’t entirely new, but it’s only now that we have the right tools to fully reap its benefits. Containers make it extremely easy to implement a CI/CD pipeline and enable a much more collaborative culture.


Automation Is a Game Changer, Not a Job Killer

While many businesses embrace the positives of digitization, employees approach these changes with far less enthusiasm. Words like “automation” and “digitization” are loaded with baggage, invoking negative associations of job loss. Employees are quick to assume the worst, fearing they’ll be left behind or eliminated. But is that fear warranted? Not so, according to BDO’s recent survey of middle market executives. The majority of companies are adding new digital enablement projects, with 34% planning to increase headcount and 42% comprehensively re-imagining job roles. Only 22% expect the use of automation to have a negative impact on headcount. In most cases, jobs are changing and evolving, requiring employees to work alongside new technologies, develop new skill sets and integrate automation into their daily work lives. But for these digital initiatives to succeed, organizations need to secure employee buy-in. Otherwise, initiatives will fall well short of reaching maximum ROI. So, how can CIOs and IT leaders change resistance into adoption and dispel unwarranted fears among the workforce?


Bugs in NVIDIA’s Jetson Chipset Opens Door to DoS Attacks, Data Theft

The most severe bug, tracked as CVE‑2021‑34372, opens the Jetson framework to a buffer-overflow attack. According to the NVIDIA security bulletin, the attacker would need network access to a system to carry out an attack, but the company warned the vulnerability is not complex to exploit and that an adversary with low access rights could launch it. It added that an attack could give an adversary persistent access to components beyond the NVIDIA chipset targeted, and allow a hacker to manipulate and/or sabotage a targeted system. “[The Jetson] driver contains a vulnerability in the NVIDIA OTE protocol message parsing code where an integer overflow in a malloc() size calculation leads to a buffer overflow on the heap, which might result in information disclosure, escalation of privileges and denial of service (DoS),” according to the security bulletin, posted on Friday. Oblivious transfer extensions (OTE) are low-level cryptographic algorithms used by Jetson chipsets to process the private-set-intersection protocols used to secure data as the chip processes it.
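For readers unfamiliar with the bug class named in the bulletin, the arithmetic behind an integer overflow in a malloc() size calculation can be illustrated as follows; the numbers are hypothetical and not taken from the CVE:

```python
# With a 32-bit size_t, the product count * elem_size wraps modulo 2**32.
count, elem_size = 0x2000_0001, 8          # attacker-influenced message fields
requested = count * elem_size              # true size needed: 0x100000008 bytes
allocated = requested % 2**32              # what a 32-bit malloc() actually sees

print(hex(requested))  # 0x100000008
print(hex(allocated))  # 0x8 -> an 8-byte buffer
# The parser then writes count * elem_size bytes into that tiny buffer,
# overflowing the heap -- the condition the bulletin describes.
```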


How can technology design be made more inclusive?

With an increasing reliance on screens to communicate, organisations should also look to ensure that product design addresses how the software facilitates this, and make adjustments where necessary. “Brands must consider all forms of disabilities, such as vision and hearing impairments, as well as conditions like autism, at the very beginning of the design process,” said Paul Clark, senior vice-president and EMEA managing director at Poly. “At Poly, we’ve spent a lot of time making our solutions more accessible. For example, one of our customer’s employees is highly motivated to contribute but has Duchenne Muscular Dystrophy and was self-conscious about the loud, high-pitched noises that his ventilator made during calls. Poly’s NoiseBlock AI technology has been built into all of our headsets and video bars to minimise non-human sounds. Our personal video bar was able to tell that the ventilator noises were not speech and blocked them out. “Simple solutions like raised volume buttons enable the user to recognise controls by touch instead of sight. Brands should also consider ease of use and comfort for people who wear headdress, for example.”


Driving network transformation with unified communications

As with most digital processes, cybersecurity remains a primary concern for businesses. With the increased use of UC platforms, such as Microsoft Teams, new security challenges are emerging. And quite often these vulnerabilities come from actions that we do not think twice about. Video recordings, for example, often contain sensitive and confidential information that could prove detrimental if discovered outside of the company. Yet, these recordings are typically stored in a server, or downloaded onto a desktop without much consideration. In addition to threats against sensitive content and data, real time collaboration can cause security weaknesses. With the right tools, criminals could acquire the necessary link to access private conferences and documents on a UC platform. Whether to simply eavesdrop or cause disruption, this breach could result in a number of consequences, both in the short and long term. Again, these calls and documents may contain confidential details which could be exploited by criminals if leaked. Disruptions to conferences will not only cause frustrations at the time, but also potentially damage the reputation of organizations.



Quote for the day:

"Keep your fears to yourself, but share your courage with others." -- Robert Louis Stevenson

Daily Tech Digest - June 21, 2021

Enterprises Face Growing Technical Debt

There is not one single factor causing technical debt. Many issues within an organization contribute to the problem, including reliance on stale technology, pressure to deliver in the short term, constant change, developer churn and incorrect architectural decisions. The top cause of technical debt is too many development languages and frameworks, cited by 52% of respondents as a big or critical problem. Legacy technology can weigh an IT department down. But it’s not necessarily only old tech getting in the way—it could be that IT is supporting too many competing agendas. The second top cause is a high turnover rate within developer teams. In today’s competitive climate, quality engineers are in short supply, and hiring can be challenging. It is thus difficult to attract and nourish steady engineering talent. If developers frequently leave for greener pastures, especially before documenting their procedures, best practices can easily be lost and efficient use of technology is stunted. The study found that other common causes include accepting known defects to meet deadlines, using outdated programming languages and frameworks and dealing with challenges in serving new markets or segments.


No code software — the most effective path to closing the IT skills gap

The tech sector has long been governed by a certain subset of society and has lacked diversity. According to Diversity in Tech, 15% of the tech workforce are from BAME backgrounds and gender diversity stands at 19%, compared to 49% for all other jobs within the UK. Considering the tech industry is growing almost three times as rapidly as the rest of the UK economy, tech and software development is a lucratively paid and in-demand field for those with the skills. However, there is no doubt it’s exclusionary. While this is a recognised issue many are keen to rectify, movement towards change is slow. Socioeconomic dynamics mean privileged groups prevail, so change must happen at the grassroots level. If children don’t have access to devices at home, attend schools with archaic software and hardware, or aren’t equipped with a support mechanism or role models, they will find themselves on the back foot for a career in tech. Roles such as software development take time to train and prepare for, meaning they can be hard to break into without background experience. The lack of gender-diverse and BAME role models within the tech industry perpetuates this imbalance.


Google’s health care data-sharing partnership is a problem

Privacy concerns are not just related to the fact that stolen data could potentially harm patients and consumers, however. They are also tied to the simple reality that individuals feel as though they have no say in how their personal data is acquired, stored, and used by entities with which they have not meaningfully consented to share their information. According to the Pew Research Center, more than half of Americans have no clear understanding of how their data is used once it has been collected, and some 80% are concerned about how much of their data advertisers and other social media companies have collected. ... The legitimate concerns of consumers, combined with a massive and growing amount of data theft, make agreements like the one between Google and HCA unwise, despite their potential benefits. While the data that Google will have access to will be anonymized and secured through Google’s Cloud infrastructure, it will be stored without the consent of the patients whose deeply personal information is in question. This is because privacy laws in the United States allow hospitals to share patient information with contractors and researchers even when patients have not consented.


What Is A Convolutional Layer?

Most classification tasks are based on images and videos, and we have seen that the convolutional layer plays a key role in performing them. In mathematics, convolution is an operation on two functions that produces a third function expressing how the shape of one is modified by the other. Applied to CNNs, convolution denotes the operation in which two images, represented as matrices, are combined to produce an output used to extract features from an image. Convolution is the simple application of a filter to an input image that results in an activation, and repeated application of the same filter across the image produces a map of activations called a feature map, indicating the location and strength of a detected feature in the input image. ... The CNN is a special type of neural network model designed to work on image data that can be one-dimensional, two-dimensional, or sometimes three-dimensional. Its applications range across image and video recognition, image classification, medical image analysis, computer vision and natural language processing.
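A minimal sketch of the convolution operation described above, using NumPy; the 4x4 "image" and the 2x2 vertical-edge filter are illustrative, and real CNN layers add channels, padding, stride and learned weights:

```python
import numpy as np

def convolve2d(image, kernel):
    # Slide the kernel over every valid position and sum the
    # element-wise products -- one value per feature-map position.
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny "image" with a vertical edge, and a simple edge-detecting filter.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[1, -1],
                   [1, -1]], dtype=float)

feature_map = convolve2d(image, kernel)
print(feature_map)  # strongest (negative) response along the edge column
```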


Krustlet Brings WebAssembly to Kubernetes with a Rust-Based Kubelet

By enabling WebAssembly, Krustlet offers increased density, faster startup and shutdown times, smaller network and storage footprints, and all of these are features that not only support microservices but also operation on the edge and in IoT environments. In addition, WebAssembly also offers the ability to run on multiple architectures without being recompiled, has a security model that distrusts the guest by default, and can be executed by an interpreter and streamed in, meaning it can be run on the smallest of devices. “Krustlet, potentially combined with things like SUSE/Rancher’s k3s, can make inroads into IoT by providing a small-footprint extension to a Kubernetes cluster. This points to a sea change occurring in Kubernetes. When some folks at Google first wrote Kubernetes, they were thinking about clusters in the data center. But why think only in terms of the data center?” asks Butcher. “Imagine a world where the pod could be dynamically moved as close to the user as possible — down to a thermostat or a gaming console or a home router. And then, as the person left home, that app could ‘leave’ with them, hopping to the next closest place — yet still within the same cluster. Certainly, that’s tomorrow’s world, but Krustlet is a step toward realizing it.”


Top 5 Cloud Security Challenges Teams Face In 2021

Unfortunately, many teams don’t think about security, and sometimes even overall governance, until it’s too late. Whether they don’t have the budget, think they don’t yet have the scale, or it’s just not top of mind, procrastinating on cloud security can expose an organization to breaches, non-compliance, and other high-risk issues. On the flip side, organizations might have initially taken too heavy-handed of an approach and implemented such strict controls that it prevents them from fully realizing the promise of cloud and DevOps in the future. Thinking about cloud security should happen early, which includes implementing not just the right tools, but also the right processes and people. And it’s never too early to start, because security needs to be woven into your process from the beginning. ... Organizations wanting to keep on top of their cloud security need to prioritize constant education and upskilling, not just around traditional security applied to the cloud but also around industry best practices and cloud fundamentals, too. Identify team members willing to go deeper and pair them with industry experts within the organization, or take advantage of free educational tools from the major cloud providers to keep your team’s knowledge base wide and ever-evolving.


Embrace integrations and automation as you build a security program

In enterprise systems, automation refers to the ability to take a human-operated task, reduce it to a data model, and then create a script or piece of code for repeatability. Compliance has typically been a labor-intensive practice. Given the variety and amount of human labor required to meet compliance objectives, automation often cannot be applied broadly. Audit evidence collection, via an integration, lends itself well to an automated solution, and this form of automation can also ensure the timeliness of evidence collection. However, it represents only a tiny percentage of the labor required to pass an audit. All organizations can realize benefits from automated compliance by considering which tasks would traditionally require a consultant and whether those tasks are repeatable across consultants: for example, performing an annual risk assessment, or mapping an organization’s cybersecurity policies and controls against a common standard such as ISO 27001 or SOC 2. People are still required to ensure that the quality of these tasks is acceptable.
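
As a hypothetical illustration of audit evidence collection via an integration, here is a short sketch that pulls an IAM user inventory from AWS and stores it as timestamped evidence; it assumes boto3 with configured credentials, and the control name and file convention are invented:

```python
import json
from datetime import datetime, timezone

import boto3

iam = boto3.client("iam")
users = iam.list_users()["Users"]  # pagination omitted for brevity

evidence = {
    "control": "Access review - IAM user inventory",  # hypothetical control name
    "collected_at": datetime.now(timezone.utc).isoformat(),
    "users": [u["UserName"] for u in users],
}

# Persist a timestamped evidence artifact for the auditor.
with open(f"evidence-iam-users-{evidence['collected_at'][:10]}.json", "w") as f:
    json.dump(evidence, f, indent=2)
```

Scheduling a script like this ensures the evidence is always fresh, which addresses the timeliness point above.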


6 Steps Companies Can Take to Strengthen Their Cyber Strategy

With more than a year of remote work for hundreds of thousands of people, many companies historically known for on-premises infrastructures are now shifting to multi-cloud strategies. Multi-cloud strategies are valuable because they provide the best possible cloud service for each workload. Today, our cybersecurity group is partnering with our digital transformation team to enable multi-cloud adoption in a way that advances and streamlines our specific business operations. Cyber leaders should develop risk controls upfront when ushering in multi-cloud strategies so that they don’t hinder the pace of adoption while still protecting the company’s assets and data. ... Biometrics are a significant game-changer in cyber protection. It’s much harder for a threat actor to break into a system designed around behavioral attributes -- like how quickly people type, how they move their mouse, or what applications they have open -- than a system reliant on static passwords. In fact, we’re working with our data science team to pilot our own data models, leveraging new technologies available in the industry to replace passwords internally over time.
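
A toy sketch of the behavioral-biometrics idea, comparing a session's typing rhythm against an enrolled profile; real systems model far richer signals, and all numbers and thresholds here are invented:

```python
import statistics

enrolled_gaps = [0.11, 0.13, 0.12, 0.15, 0.10, 0.14]  # seconds between keystrokes at enrollment
session_gaps = [0.32, 0.29, 0.35, 0.31, 0.30, 0.33]   # gaps observed in the current session

baseline = statistics.mean(enrolled_gaps)
observed = statistics.mean(session_gaps)

# Flag the session if the typing rhythm drifts far from the enrolled baseline.
if abs(observed - baseline) > 2 * statistics.stdev(enrolled_gaps):
    print("challenge user: typing rhythm deviates from profile")
else:
    print("session consistent with enrolled profile")
```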


Why big data doesn’t exist — it’s all about the value

Over the past decade there has been an abundance of cases where well-known brands, which typically sit on a mammoth amount of historic data, have collapsed due to not handling it effectively. Companies including retailer Toys “R” Us, book chain Borders, and more recently, department store Debenhams, failed to optimise operations quickly enough to stay relevant in a highly competitive digital environment. Had they responded to what their data analysis was telling them, the outcome for these businesses could have been different. Adopting technology that can process and manage data, and provide real-time visualisations of what is happening within the organisation, can deliver greater insight into everything from product materials and production rates to customer shopping habits and market trends. By knowing what’s working and what’s not, businesses can make decisions based on the evidence the data shows, rather than relying on ‘gut instinct’. The pandemic is an excellent example of how valuable data, rather than simply big data, can be used to drive decisions, as many businesses were forced to accelerate their digital strategies to remain viable. Management consultancy McKinsey reports that the crisis brought about years of change to the way all companies and sectors do business.


Leveraging Small Teams to Scale Agility - a Red Hat Case Study

Doing Agile does not always mean being Agile, and the starting state of this group demonstrated that in the way they were working. They’d had training and had wound up with a rotating cast of characters in scrum teams with 10+ boards. After I arrived, we did another training cycle on Agile; this occurred after the team had committed to doing Agile, and it helped everyone acquire the tools for success. Even with their previous challenges, like many teams new to Agile they got excited. Knowledge is power. But even moving the team members from novice to amateur still left them struggling with concepts like capacity. Like many teams, they struggled to understand their capacity level and tended to overcommit the volume of work to be completed in each Sprint. This is where a Scrum Master can support a team, helping guide it to maturity as it learns to deliver value and take responsibility for that value as a team. It would have been impossible for me to do my work if my manager and the team didn’t trust me. I started with the trust given to me by my manager and the trust of the functional managers as a platform to build on.
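
As a rough illustration of the capacity concept the team struggled with, here is a back-of-the-envelope calculation; the focus factor and availability numbers are invented assumptions, not Red Hat's method:

```python
# Sprint capacity = available person-days scaled by a focus factor that
# accounts for meetings, support work, and interruptions.
focus_factor = 0.7  # assumed fraction of time actually spent on sprint work

team_availability = {"dev_a": 10, "dev_b": 8, "dev_c": 10}  # days this sprint

capacity_days = sum(team_availability.values()) * focus_factor
print(f"commit to roughly {capacity_days:.0f} person-days of work")  # ~20
```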



Quote for the day:

"If you don't understand people, you don't understand business." -- Simon Sinek

Daily Tech Digest - June 20, 2021

The Reality Behind The AI Illusion

So far, AI has shown impressive results only in narrow application areas, like chess-playing computers beating world chess champions and supercomputers beating human Jeopardy champions. However, these are computers programmed to solve one specific problem, and they cannot interpret more complex and multilayered challenges beyond the given task. This is exactly what Moravec’s paradox states: though it may be easy to get computers to beat human chess champions, it may be difficult to give them the skills of a toddler when it comes to perception and mobility. While AI has not reached human-level performance, it brings valuable solutions to many real-world problems quickly and effectively. From enhanced healthcare, innovations in banking and improved environmental protection to self-driving vehicles, automated transportation, smart homes and chatbots, AI can offer simpler and more intelligent ways of accomplishing many of our daily tasks. But how far can AI go? Will it ever be able to function autonomously and mimic cognitive human actions? We cannot envision how AI will end up evolving in the far-off future, but at this point, humans remain smarter than any type of AI.


Mastering the Data Monetization Roadmap

The Data Monetization Roadmap provides both a benchmark and a guide to help organizations with their data monetization journey. To successfully navigate the roadmap, organizations must be prepared to traverse two critical inflection points: Inflection Point #1 is where organizations transition from data as a cost to be minimized to data as an economic asset to be monetized (the “Prove and Expand Value” inflection point); Inflection Point #2 is where organizations master the economics of data and analytics by creating composable, reusable, and continuously refining digital assets that can scale the organization’s data monetization capabilities (the “Scale Value” inflection point). Carefully navigating these two inflection points enables organizations to fully exploit the game-changing economic characteristics of data and analytics assets – assets that never deplete, never wear out, can be used across an unlimited number of use cases at zero marginal cost, and can continuously learn, adapt, and refine, resulting in assets that actually appreciate in value the more they are used.


Will AI Make Interpreters and Sign Language Obsolete?

One of Google’s newest ASR NLPs is seeking to change the way we interact with others around us, broadening the scope of where — and with whom — we can communicate. The Google Interpreter Mode uses ASR to identify what you are saying and produces an exact translation into another language, effectively creating a conversation between foreign individuals and knocking down language barriers. Similar instant-translate tech has also been used by SayHi, which allows users to control how quickly or slowly the translation is spoken. There are still a few issues in ASR systems. In what is often called the AI accent gap, machines sometimes have difficulty understanding individuals with strong accents or dialects. Right now, this is being tackled on a case-by-case basis: scientists tend to use a “single accent” model, in which different algorithms are designed for different dialects or accents. For example, some companies have been experimenting with separate ASR systems for recognizing Mexican dialects of Spanish versus those spoken in Spain. Ultimately, many of these ASR systems reflect a degree of implicit bias. In the United States, African-American Vernacular English ...
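
The general recognize-then-translate pattern can be sketched in a few lines; this is not Google's Interpreter Mode, and it assumes the SpeechRecognition and transformers packages plus a local utterance.wav file:

```python
import speech_recognition as sr
from transformers import pipeline

# Step 1: ASR - turn recorded speech into text (Google Web Speech API backend).
recognizer = sr.Recognizer()
with sr.AudioFile("utterance.wav") as source:
    audio = recognizer.record(source)
text = recognizer.recognize_google(audio)

# Step 2: machine translation - here English to German via a small T5 model.
translator = pipeline("translation_en_to_de", model="t5-small")
print(translator(text)[0]["translation_text"])
```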


Bad cybersecurity behaviors plaguing the remote workforce

Over one quarter of employees admit that, while working from home, they made cybersecurity mistakes — some of which compromised company security — that they say no one will ever know about. 27% say they failed to report cybersecurity mistakes because they feared facing disciplinary action or further required security training. In addition, just half of employees say they always report to IT when they receive or click on a phishing email. ... As lockdown restrictions are lifted, six in 10 IT leaders think the return to business travel will pose greater cybersecurity challenges and risks for their company. These risks could include a rise in phishing attacks whereby threat actors impersonate airlines, booking operators, hotels or even senior executives supposedly on a business trip. There is also the risk that employees accidentally leave devices on public transport or expose company data in public places. ... As cybersecurity will be mission-critical in the new work environment, it’s encouraging that 67% of surveyed IT decision makers report that they have a seat at the table when it comes to office reopening plans in their organizations.


Microsoft's new security tool will discover firmware vulnerabilities

Today, ReFirm needs you to provide the firmware files, but Microsoft plans to create a database of device information, Weston says. "You plug in CyberX and it discovers the devices, it monitors them and it asks ReFirm 'do you know anything about IoT device X or Y'. Hopefully we've pre-scanned most of those devices and we can propagate the information -- and for anything we don't have, there's the drag-and-drop interface to do a custom analysis." Having that visibility of what's on your network and whether it's safe to have on your network is a good first step. The Azure Device Updates service can already push IoT firmware updates out through Windows Update. Microsoft's bigger vision is to create a service based on Windows Update that can handle a much wider range of third-party devices, says Weston. "We're going to take Windows Update, which people already at least know and trust on Patch Tuesdays, and we want to push the IoT and edge devices into that model. Microsoft's update system is a pretty known commodity -- just about every government regulator out there looked at it in one form or another -- and so we feel good about being able to move customers towards it."


Deep Learning, XGBoost, Or Both: What Works Best For Tabular Data?

Today, XGBoost has grown into production-quality software that can process huge swathes of data on a cluster. In the last few years, XGBoost has added multiple major features, such as support for NVIDIA GPUs as a hardware accelerator and for distributed computing platforms including Apache Spark and Dask. However, there have been several recent claims that deep learning models outperform XGBoost. To verify this claim, a team at Intel published a survey on how well deep learning works for tabular data and whether XGBoost’s superiority is justified. The authors explored whether DL models should be a recommended option for tabular data by rigorously comparing recent deep learning models to XGBoost on a variety of datasets. The study showed that XGBoost outperformed DL models across a wide range of datasets and required less tuning. However, the paper also suggested that an ensemble of the deep models and XGBoost performs better on these datasets than XGBoost alone. For the experiments, the authors examined DL models such as TabNet, NODE, DNF-Net, and 1D-CNN, along with an ensemble that includes five different classifiers: TabNet, NODE, DNF-Net, 1D-CNN, and XGBoost.
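
A minimal sketch of the ensembling idea the paper reports, with scikit-learn's MLP standing in for the heavier DL models (TabNet, NODE, etc.) and simple probability averaging; the synthetic dataset and all parameters are illustrative assumptions, not the survey's setup:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the two model families separately on the same tabular data.
xgb = XGBClassifier(n_estimators=200).fit(X_train, y_train)
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500).fit(X_train, y_train)

# Average the predicted probabilities; weighted averaging is a common variant.
proba = (xgb.predict_proba(X_test) + mlp.predict_proba(X_test)) / 2
accuracy = ((proba[:, 1] > 0.5) == y_test).mean()
print(f"ensemble accuracy: {accuracy:.3f}")
```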


Insider Versus Outsider: Navigating Top Data Loss Threats

While breaches from outside cybercriminals are becoming more complex and require more resources to combat, companies mustn’t lose sight of a data-loss cause closer to home – their employees. In their day-to-day positions, employees are entrusted with highly sensitive information, from financial and personally identifiable information (PII) to medical records and intellectual property. While employee error is a major source of security breaches, a well-trained employee who knows how to take the proper precautions is a key defense against attacks and breaches. Over the course of their daily responsibilities, employees can mistakenly share this sensitive information outside of the secure network. Often, the data loss occurs through email, such as mentioning restricted information in outside correspondence or attaching documents that violate customer or patient privacy. For example, let’s say an employee is working on a presentation that contains confidential data. They hit a roadblock while trying to fix a formatting issue, and in their race to meet a looming deadline, they decide to reach out to a friend for help and send the presentation via email with the confidential data included.
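
A hypothetical sketch of a crude outbound-email DLP check of the kind that could catch the scenario above; real DLP products use far richer classification, and the patterns and function name here are invented:

```python
import re

# Crude patterns for restricted data; real systems use trained classifiers.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidential_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def check_outgoing(text: str) -> list[str]:
    """Return the names of any restricted-data patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

hits = check_outgoing("Attaching the CONFIDENTIAL Q3 deck, SSN 123-45-6789 inside.")
if hits:
    print("blocked:", hits)  # e.g. quarantine the message and notify the sender
```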


Lawmakers Urge Private Sector to Do More on Cybersecurity

Treating cybersecurity as a core business risk and devoting the appropriate resources to it is now essential, said Tom Kellermann, head of cybersecurity strategy at software firm VMware Inc., who also sits on the Secret Service’s Cyber Investigation Advisory Board. “Cybersecurity should no longer be viewed as an expense, but a function of conducting business,” he said. Christopher Roberti, senior vice president for cyber, intelligence and supply chain security policy at the U.S. Chamber of Commerce, which says it is the world’s largest business association, said companies don’t stand a chance against determined nation-state attacks regardless of cybersecurity investments. Partnerships between the government and the private sector are essential, he said. “Businesses must take necessary steps to ensure their cyber defenses are robust and up to date, and the U.S. government must act decisively against cyber criminals to deter future attacks. Each has a role to play and both need to work closely to do more,” Mr. Roberti said.


AI Centers Of Excellence Accelerate AI Industry Adoption

It is important to note that there are several functional and operational models that enterprises are adopting for their CoE. The change management model focuses on emphasizing the prospective innovation that artificial intelligence can provide for business stakeholders in the organization. Central to this model is the education and training of executives and business units. In addition to change management, the Sandbox approach is another central model, in which the CoE acts as the company’s hub of innovation and R&D. This model emphasizes proofs of concept and different emerging technologies. The key is aligning business units around POCs and being accountable for the initial launch and development of per-subject use cases. Lastly, the Launchpad model for the CoE leverages and builds upon the capabilities of existing data scientists, engineers, and developers. The CoE deploys top subject-matter experts across departments to conduct hands-on training and education and to scope out early-stage business solutions.

Kubernetes: 5 tips we wish we knew sooner

“One thing that’s better to learn earlier than later with Kubernetes is that automation and audits have an interesting relationship: automation reduces human errors, while audits allow humans to address errors made by automation,” Andrade notes. You don’t want to automate a flawed process. It’s often wise to take a layered approach to container security, including automation. Examples include automating security policies governing the use of container images stored in your private registry, as well as performing automated security testing as part of your build or continuous integration process. Check out a more detailed explanation of this approach in 10 layers of container security. Kubernetes operators are another tool for automating security needs. “The really cool thing is that you can use Kubernetes operators to manage Kubernetes itself – making it easier to deliver and automate secured deployments,” as Red Hat security strategist Kirsten Newcomer explained to us. “For example, operators can manage drift, using the declarative nature of Kubernetes to reset any unsupported configuration changes.”
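
As one concrete example of an automated audit in that layered approach, here is a sketch that flags pods running images from outside a private registry; it assumes the official kubernetes Python client and a reachable cluster, and the registry prefix is an invented example:

```python
from kubernetes import client, config

ALLOWED_PREFIX = "registry.internal.example.com/"  # hypothetical private registry

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

# Audit every running pod's container images against the registry policy.
for pod in v1.list_pod_for_all_namespaces().items:
    for container in pod.spec.containers:
        if not container.image.startswith(ALLOWED_PREFIX):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: "
                  f"unapproved image {container.image}")
```

Run on a schedule, a check like this is the "audit" half of the relationship Andrade describes: it catches drift that the automated policies themselves may have introduced.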
 


Quote for the day:

"Well, I think that - I think leadership's always been about two main things: imagination and courage." -- Paul Keating