Daily Tech Digest - May 31, 2021

How The World Is Updating Legislation in the Face Of Persistent AI Advances

Recently, 13 cities across the US placed a ban on the use of facial recognition technology by the police. Interestingly, 12 of these 13 cities were Democrat-led, reflecting the cultural differences within the country itself. The European Union is the gold standard when we talk about data privacy and laws governing the various aspects of technology. To protect individuals’ rights and freedoms, Article 22 of the GDPR, “Automated individual decision making, including profiling,” ensures the availability of manual intervention in automated decision making in cases where an individual’s rights and freedoms are affected. The first paragraph, “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her,” and the third paragraph, “the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision,” together provide individuals with the right to manual intervention.


Why ML Capabilities Of GCP Is Way Ahead Of AWS & Azure

TPUs are Google’s custom-developed application-specific integrated circuits (ASICs), built to accelerate ML workloads. A big advantage for GCP is Google’s strong commitment to AI and ML. “The models that used to take weeks to train on GPU or any other hardware can put out in hours with TPU. AWS and Azure do have AI services, but to date, AWS and Azure have nothing to match the performance of the Google TPU,” said Jeevan Pandey, CTO, TelioLabs. ... Google Cloud’s open-source contributions, especially in tools like Kubernetes (a portable, extensible, open-source platform for managing containerized workloads and services that facilitates declarative configuration and automation), have worked to its advantage. ... Google Cloud’s speech and translate APIs are much more widely used than their counterparts. According to Gartner’s 2021 Magic Quadrant, Google Cloud has been named the leader for Cloud AI services. Additionally, one of the top ML services from Google Cloud is Vision AI, powered by AutoML; its pre-trained ML models can be used instantly to classify objects in an image into millions of predefined categories.
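
As an illustration of how those pre-trained models are consumed, here is a minimal, hypothetical sketch of calling the Cloud Vision API from Python with the google-cloud-vision client; the image path and the surrounding project/credentials setup are assumptions, not details from the article.

```python
# Minimal sketch: labeling an image with Google Cloud's pre-trained Vision API.
# Assumes the google-cloud-vision client library is installed and that
# application-default credentials for a GCP project are already configured.
from google.cloud import vision

def label_image(path: str) -> list[str]:
    """Return label descriptions for a local image file."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations]

if __name__ == "__main__":
    # "sample.jpg" is a placeholder path, not from the article.
    print(label_image("sample.jpg"))
```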


How Robotic Process Automation can improve healthcare at scale

RPA isn’t just a boon for patient-facing organizations—healthcare vendors are getting in on the action, too. For example, the company I work for faced the daunting challenge of transferring over 1 million pieces of patient data from one EMR to another. As any medical professional can attest, switching EMRs is a notoriously time-consuming process. So, we invested in RPA to bring efficiency to an otherwise manual and laborious task. In the end, we saved valuable time—and a significant chunk of change. ... One of the biggest contributors to burnout is the ever-increasing administrative work stemming from non-clinical tasks like documentation, insurance authorizations, and scheduling—all things that can be done faster and more accurately with RPA. And when providers are freed from the monotony, they have more time to focus on the parts of the job that they really enjoy. This, in turn, boosts morale and productivity, thus enhancing care delivery and optimizing patient outcomes overall. For those working in healthcare, the demand for digital solutions like RPA feels like the dawning of a new era—albeit one that is met with mixed emotions.


The many lies about reducing complexity part 2: Cloud

Managers in IT are sensitive to complexity, as it generally is their biggest headache. Hence, in IT, people are in a perennial fight to make the complexity bearable. One method that has been popular for decades has been standardisation and rationalisation of the digital tools we use, a basic “let’s minimise the number of applications we use”. This was actually part 1 of this story: A tale of application rationalisation (not). That story from 2015 explains how many rationalisation efforts were partly lies. (And while we’re at it: enjoy this Dilbert cartoon that is referenced therein.) Most of the time multiple applications were replaced by a single platform (in short: a platform is software that can run other software) and the applications had to be ‘rewritten’ to work ‘inside’ that platform. So you ended up with one extra platform, the same number of applications and generally a few new extra ways of ‘programming’, specific to that platform. That doesn’t mean it is all lies. The new platform is generally dedicated to a certain type of application, which makes programming these applications simpler. But the situation is not as simple as the platform vendors argue.


Implementing Nanoservices in ASP.NET Core

There is no precise definition of how big or small a microservice should be. Although microservice architecture can address a monolith's shortcomings, each microservice might grow large over time. Microservice architecture is not suitable for applications of all types, and without proper planning, microservices can grow as large and cumbersome as the monolith they are meant to replace. A nanoservice is a small, self-contained, deployable, testable, and reusable component that breaks down a microservice into smaller pieces. Unlike a microservice, a nanoservice does not necessarily reflect an entire business function. Since they are smaller than microservices, different teams can work on multiple services at a given point in time. A nanoservice should perform one task only and expose it through an API endpoint. If you need your nanoservices to do more work for you, link them with other nanoservices. Nanoservices are not a replacement for microservices - they compensate for the shortcomings of microservices.
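
The article’s examples target ASP.NET Core; purely to illustrate the “one task behind one endpoint” idea in a compact way, here is a hypothetical sketch in Python with Flask. The endpoint name and the conversion task are invented for the example and are not from the article.

```python
# Illustrative only: a nanoservice does exactly one thing and exposes it
# through a single API endpoint. The article uses ASP.NET Core; Flask is
# used here just to keep the sketch short and self-contained.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.get("/celsius-to-fahrenheit")
def convert():
    # One task only: temperature conversion, exposed as one endpoint.
    celsius = float(request.args.get("celsius", 0))
    return jsonify({"fahrenheit": celsius * 9 / 5 + 32})

if __name__ == "__main__":
    app.run(port=5000)
```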


Five Data Governance Trends for Digital-Driven Business Outcomes in 2021

Knowledge of data-in-context, data processes, the best techniques to provision data, as well as the tools enabling these methods of self-service, is crucial to democratizing data. However, with technology advancements, including virtualization, self-service discovery catalogs, and data delivery mechanisms, internal data consumers can now shop for and provision data in shorter cycles. In 2020, it took organizations anywhere from one to three weeks to provision complex data that requires integration from multiple sources. Also, an increase in data awareness will help data consumers further explore available dark data that can provide predictive insights to create new user stories that can propel customer journeys. ... A lack of focus is common across organizations as they treat Data Governance as an extension of either a compliance or a risk function. Data Literacy will, in fact, change the attitude of business owners towards having to actively manage and govern data. There are immediate and cumulative benefits from actively governing data, whether by defining data or fixing bad-quality data. But there is a need for a value-realization framework to actively manage the benefits of Data Management services.


Best practices for securing the CPaaS technology stack

Certifications are certainly important to consider when evaluating options, but even so, certifications don’t guarantee security. It is a best practice to check on the maturity of these vendor-specific certifications, as some companies go through a process of self-certification that doesn’t necessarily ensure the level of security your organization needs. Sending a thoughtful questionnaire to multiple vendors can be helpful for scoring these vendors’ security, offering a holistic and specific viewpoint to be considered by an organization’s IT team. On the customer end, in-house security and engineering staff can prepare for CPaaS implementation by becoming familiar with the APIs in use and their authentication methods, communications protocols, and the data that flows to and from them. Hackers routinely perform reconnaissance to find unprotected APIs and exploit them. Once CPaaS is incorporated into the hybrid work model technology stack, it is a best practice for an organization to focus on its endpoint management. The use of a centralized endpoint management system that pushes patches for BIOS, operating systems, and applications is necessary for protecting the cloud network and customer data once a laptop connects.


3 SASE Misconceptions to Consider

Solution architecture is important, and yes, you want to minimize the number of daisy chains to reduce complexity. However, it doesn't mean you cannot have any daisy chains in your solution. In fact, dictating zero daisy chains can have consequences — not for performance, but for security. SASE consolidates a wide array of security technologies into one service, yet each of those technologies is a standalone segment today — with its own industry leaders and laggards. Any buyer who dictates "no daisy chains" is trusting that one single SASE provider can (all by itself) build the best technologies across a constellation of capabilities that is only growing larger. Being beholden to one company is not pragmatic given that the occasional daisy chain greatly increases the ability to unite best-of-breed technologies under one service provider's umbrella. ... SASE revolves around the cloud and is undoubtedly about speed and agility achieved through cloud-deployed security. But SASE doesn't mean the cloud is the only way to go and you should ignore everything else. Instead, IT leaders must take a more practical position, using the best technology given the situation and problem.


Advice for Someone Moving From SRE to Backend Engineering

The work you’re doing as an SRE will partly depend on your company culture. Without a doubt, some organizations will relegate their SREs to driving existing processes like watching on-call to make sure there are no tickets, running deployments, etc. This can make folks feel like they aren’t progressing. However, today there are a lot more things you can do as an SRE than you once could. You used to just have Bash. Now you have many automation opportunities that will hone your programming skills. You can configure Kubernetes and Terraform. There are a bunch of code-oriented tools that you can use. You can orchestrate your stuff in Python. You could also use something like Shoreline if you want, which is “programming for operations” and allows you to think of the world in terms of control loops and how you can automate there. DevOps has also increased the Venn diagram overlap between SRE and Backend engineering. Previously, engineers used version control and package managers, while SREs used deployment systems and Linux administration tools.


Inspect & Adapt – Digging into Our Foundations of Agility

When we need to change, we usually feel resistance against it. Take the current pandemic, for instance. The simple action of wearing a facemask in public has caused indisputable resistance in many of us. Cognitively, we understand that there is a benefit to doing so, even if there were long discussions on exactly how beneficial it would be. But emotionally it did not come naturally and easily to most. Do you remember how it felt the first time you wore a facemask when entering the supermarket? It was not very pleasant, was it? But even when we are the driver of change we might find resistance against it. New year’s resolutions come to mind again. The majority of new year's resolutions are abandoned come February, even though the desired results have not been achieved. In other words, the resistance to change might sometimes show up late to the party. What might be missing here is endurance and resilience to small setbacks. I believe that we need a thorough understanding of the situation we are currently in. This sounds simple and easy. And on a mid-level it is. "We need to come out of the pandemic with a net positive", a director of a company might say.



Quote for the day:

"It's very important in a leadership role not to place your ego at the foreground and not to judge everything in relationship to how your ego is fed." -- Ruth J. Simmons

Daily Tech Digest - May 30, 2021

Wanted: Millions of cybersecurity pros. Rate: Whatever you want

In the United States, there are around 879,000 cybersecurity professionals in the workforce and an unfilled need for another 359,000 workers, according to a 2020 survey by (ISC)2, an international nonprofit that offers cybersecurity training and certification programs. Globally, the gap is even larger at nearly 3.12 million unfilled positions, the group says. Its CEO, Clar Rosso, said she thinks the need may actually be higher, given that some companies put off hiring during the pandemic. The needs range from entry-level security analysts, who monitor network traffic to identify potential bad actors in a system, to executive-level leaders who can articulate to CEOs and board directors the potential financial and reputational risks from cyber attacks. The US Bureau of Labor Statistics projects "information security analyst" will be the 10th fastest growing occupation over the next decade, with an employment growth rate of 31% compared to the 4% average growth rate for all occupations. If demand for cybersecurity professionals in the private sector increases dramatically, some experts say talented workers could leave the government for more lucrative corporate jobs


100 Days To Stronger Cybersecurity For The US Electric Grid

Regardless of company size or ownership status, all organizations that support the BES are required to comply with a set of cybersecurity standards known as the North American Electric Reliability Corporation Critical Infrastructure Protection (NERC-CIP) standards. NERC-CIP defines the reliability requirements for planning, operating and protecting the North American bulk power supply system. It covers everything from identifying and categorizing assets, to implementing physical and digital security controls, to dealing with incidents and recovering from a cyber breach. As any security officer knows, “compliance” does not guarantee “security.” Even if all companies that are part of the BES are fully compliant with NERC-CIP — and that’s a big “if” — it’s still a good idea to have a group of experts examine the security controls and bring them up to date to be able to counter current threats from a variety of adversaries. The DOE’s 100-day plan states that “the initiative modernizes cybersecurity defenses and encourages owners and operators to implement measures or technology that enhance their detection, mitigation, and forensic capabilities” 


Facebook Launches AI That Understands Language Without Labels

In a recent blog post, Facebook revealed its new AI-based speech recognition technology, wav2vec-Unsupervised (or wav2vec-U), which aims to solve the problems posed by transcribing such languages. This is a method by which individuals could build speech recognition systems that do not require transcribed data. The ML algorithm still requires some form of training. Wav2vec-U is trained purely through recorded speech audio and unpaired text. This method entails first learning the structure of the target language’s speech from unlabelled audio. Using wav2vec 2.0, Facebook’s self-supervised speech recognition model, and a k-means clustering algorithm, wav2vec-U segments the voice recording into speech units loosely based on individual sounds. For instance, the word cat would correspond to the sounds: “/K/”, “/AE/”, and “/T/”. This allows it to comprehend the structure of this speech. To recognise the words in an audio recording, Facebook will use a generative adversarial network (GAN) consisting of a generator and a discriminator network. The generator will take each audio segment embedded in self-supervised representations and predict a phoneme
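
The clustering step can be sketched conceptually. The snippet below is not Facebook’s implementation; it only shows how frame-level representations (random placeholders standing in for wav2vec 2.0 features) could be grouped into discrete, phoneme-like units with k-means and then collapsed into segments.

```python
# Conceptual sketch only: cluster frame-level speech representations into
# discrete units with k-means, similar in spirit to the segmentation step
# described for wav2vec-U. Embeddings here are random placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
frame_embeddings = rng.normal(size=(1000, 512))  # stand-in for wav2vec 2.0 features

kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(frame_embeddings)
unit_ids = kmeans.labels_  # one discrete "speech unit" id per audio frame

# Collapse consecutive repeats so each run of identical ids becomes one segment.
segments = [unit_ids[0]] + [u for prev, u in zip(unit_ids, unit_ids[1:]) if u != prev]
print(len(segments), "pseudo-units from", len(unit_ids), "frames")
```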


Why cloud governance needs to be an open source affair

Keep in mind that Cloud Custodian emerged from work Thangavelu was doing at Capital One, which is a big company with over 50,000 employees and tens of billions in revenue. It was a laboratory primed to help Thangavelu "service the different needs from different groups within the enterprise: audit, risk, security, application teams, lines of business," he said. That helped make Cloud Custodian incredibly useful within his enterprise. But just one enterprise. Open source increased the scope and utility of Cloud Custodian beyond one company's needs. "As we've gotten to open source, that pool of use cases simply expanded," he noted. No matter how creative your product managers, they're always necessarily constrained by the needs of the business they're running. By contrast, Thangavelu continued, "Open source is the strongest way to achieve [expanded scope] because your usage and your users address a wider swath of needs than any given company has. They represent the needs of a large diverse set of interests. And they're all pulling in different directions." This push-and-pull from a growing Cloud Custodian community has made it a useful tool for organizations that may have thousands or even tens of thousands of diverse policies to manage.


The Emerging Role of Artificial Intelligence in Human Lung Imaging

Recently risen to prominence, robust AI methods mark the onset of a new era in lung image analysis. Adept at seeing and making sense of vital image-led patterns, AI tools help make the respiratory field more effective, improving diagnosis and therapeutic planning and letting pulmonologists spend more time with patients. Hence, various attempts have been made lately to develop automated segmentation techniques. Yet the strain that the pandemic has placed on healthcare, and particularly on radiology, will remain until these AI-based approaches are adopted. A major hurdle of lobe segmentation arises because different respiratory diseases affect the lung architecture in different ways. For example, COVID-19 pneumonitis would manifest on imaging very differently from pulmonary emphysema. For respiratory physicians, accurate lobar segmentations are vital in order to make treatment plans appropriately. Inaccurate lobe segmentation can give misleading information about the disease process, which can lead to erroneous treatment decisions.


Network Monitoring: The Forgotten Cybersecurity Tool

Networks can be very complex, and many are segmented into VLANs to segregate traffic. What’s more, there are many devices on the network that can shape or route traffic depending on how the network infrastructure has been configured. “Today, networks are highly segmented, yet still interconnected; there are numerous devices, such as content filtering appliances, load balancers and so on, that all work together to shape and control network traffic,” Gridelli said. “Here, active network monitoring can verify whether or not security policies are properly in effect, and detect unauthorized changes to the network infrastructure.” Active network monitoring tools often deploy sensors, which can look into a network and report on what is happening on that network. Administrators can define policies that verify network segmentation, segregation and even the functionality of content filtering devices. By running end-to-end active network monitoring tests, it’s possible to also verify whether certain security policies, such as compliance requirements, are working as intended. Sensors can be installed on protected networks, such as those used for compliance (PCI, HIPAA, etc.)
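
As a rough idea of what such an end-to-end check might look like, here is a minimal, hypothetical Python sketch: a sensor attempts TCP connections and compares the outcomes with what the segmentation policy expects. The hosts, ports, and expectations are invented examples, not drawn from any particular product.

```python
# Minimal sketch of an active segmentation check, not any vendor's product:
# from a given network segment, try TCP connections to targets and compare
# the result with what the segmentation policy expects.
import socket

# (host, port, should_be_reachable) -- expectations come from your own policy.
POLICY = [
    ("10.0.1.20", 443, True),    # app tier should reach this service
    ("10.0.9.5", 5432, False),   # PCI database must NOT be reachable from here
]

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port, expected in POLICY:
    actual = reachable(host, port)
    status = "OK" if actual == expected else "POLICY VIOLATION"
    print(f"{host}:{port} reachable={actual} expected={expected} -> {status}")
```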


Graphs as a foundational technology stack: Analytics, AI, and hardware

Interest is expanding as graph data takes on a role in master data management, tracking laundered money, connecting Facebook friends, and powering the search page ranker in a dominant search engine. Panama Papers researchers, NASA engineers, and Fortune 500 leaders: They all use graphs. According to Eifrem, Gartner analysts are seeing explosive growth in demand for graph. Back in 2018, about 5% of Gartner’s inquiries on AI and machine learning were about graphs. In 2019, that jumped to 20%. From 2020 until today, 50% of inquiries are about graphs. AI and machine learning are in extremely high demand, and graph is among the hottest topics in this domain. But the concept dates back to the 18th century, when Leonhard Euler laid the foundation of graph theory. Euler was a Swiss scientist and engineer whose solution to the Seven Bridges of Königsberg problem essentially invented graph theory. What Euler did was to model the bridges and the paths connecting them as nodes and edges in a graph. That formed the basis for many graph algorithms that can tackle real-world problems. Google’s PageRank is probably the best-known graph algorithm, helping score web page authority.
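
For a concrete feel of the algorithm named above, here is a small, illustrative PageRank example using the networkx library on an invented link graph; it is not taken from the article.

```python
# Small illustration: model pages and links as a directed graph and score
# them with PageRank, the graph algorithm mentioned in the text.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("home", "docs"), ("home", "blog"),
    ("docs", "home"), ("blog", "docs"), ("blog", "home"),
])

scores = nx.pagerank(G, alpha=0.85)  # alpha is the usual damping factor
for page, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```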


Adam Grant on leadership, emotional intelligence, and the value of thinking like a scientist

One of the things that scares me in a lot of organizations is how attached people become to best practices. They might’ve been the best at the time that you created them. But as the world around you changes, as your culture evolves, what was best five years, 10 years ago may not be what’s most productive today. I think the language of best practices creates this illusion that there’s an end point, that we’ve already reached perfection. And so we don’t need to change anything. What I would love to see more organizations do instead is to strive for better practices, right? To say, “Okay, you know what? No matter how good a practice becomes it can always be improved. And we’re open to trying whatever ideas you have for trying to evolve the way that we do things around here.” ... When you see what other people are feeling, that’s information about what their motivations are, what’s occupying a lot of their energy and attention. Without that information, you’re actually handicapped as a leader.


Not as complex as we thought: Cyberattacks on operational technology are on the rise

The "low-hanging fruit" many attackers are going for are graphical user interfaces (GUI) -- including human machine interfaces (HMI) -- which are, by design, intended to be simple user interfaces for controlling complex industrial processes. As a result, threat actors are able to "modify control variables without prior knowledge of a process," Mandiant says. Another trend of note is hacktivism, propelled by widely available and free tutorials online. Recently, the researchers have seen hacktivist groups bragging in anti-Israel/pro-Palestine social media posts that they have compromised Israeli OT assets in the renewable and mining sectors. Other low-skilled threat actors appear to be focused on notoriety, however, with little knowledge of what they are targeting. In two separate cases, threat actors bragged about hijacking a German rail control system -- only for it to be a command station for model train sets -- and in another, a group claimed they had broken into an Israeli "gas" system, but it was nothing more than a kitchen ventilation system in a restaurant. 


Evolutionary Architecture from an Organizational Perspective

Business and IT must work together to understand the business environment and adapt the architecture accordingly. Only then is the feedback loop between the new customer’s needs and a created solution short enough to evolve architecture in the right direction. The delivery team directly listens to the client’s needs and proposes a solution. Therefore, our architecture evolves naturally with the overall business. There isn’t an additional layer of communication that slows down accommodating the change. When the architecture doesn’t correspond to the business environment, we can remodel architecture much more quickly. Additionally, the delivery team works more closely with the clients. They understand their needs. Based on that, the evolution of the system becomes more business-oriented. We don’t create architecture for the sake of the architecture -- we create a spine for the overall business goal. This idea of empowered teams is shown in detail in the book Empowered by Marty Cagan and Chris Jones. A team is responsible for gathering clients’ needs, discovering the right solution, implementing it, and gathering feedback.



Quote for the day:

"Leaders must know where they are going if they expect others to willingly join them on the journey." -- Kouzes & Posner

Daily Tech Digest - May 29, 2021

TSA’s pipeline cybersecurity directive is just a first step experts say

This new regulation requires that designated pipeline security companies report cybersecurity incidents to the DHS's Cybersecurity and Infrastructure Security Agency (CISA) no later than 12 hours after a cybersecurity incident is identified. The TSA estimates that about 100 companies in the US would fall under the directive's mandates. Pipeline owners and operators must also designate a cybersecurity coordinator who is required to be available to TSA and CISA 24/7 to coordinate cybersecurity practices and address any incidents that arise. Finally, pipeline owners and operators must "review their current activities against TSA's recommendations for pipeline cybersecurity to assess cyber risks, identify any gaps, develop remediation measures, and report the results to TSA and CISA." Although not appearing anywhere in the directive, pipeline companies that fail to meet the security requirements would be subject to financial fines, starting at $7,000 per day, government officials say. ... In its press release announcing the directive, the TSA said "it is also considering follow-on mandatory measures that will further support the pipeline industry in enhancing its cybersecurity and that strengthen the public-private partnership so critical to the cybersecurity of our homeland."


The Limits to Blockchain Scalability

There are two ways to try to scale a blockchain: fundamental technical improvements, and simply increasing the parameters. ... Unfortunately, there are many subtle reasons why this approach is fundamentally flawed. Computers running blockchain nodes cannot spend 100% of CPU power validating the chain; they need a large safety margin to resist unexpected DoS attacks, they need spare capacity for tasks like processing transactions in the mempool, and you don't want running a node on a computer to make that computer unusable for any other applications at the same time. Bandwidth similarly has overhead: a 10 MB/s connection does NOT mean you can have a 10 megabyte block every second! A 1-5 megabyte block every 12 seconds, maybe. And it is the same with storage. Increasing hardware requirements for running a node and limiting node-running to specialized actors is not a solution. For a blockchain to be decentralized, it's crucially important for regular users to be able to run a node, and to have a culture where running nodes is a common activity.
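
The bandwidth point can be made explicit with some back-of-the-envelope arithmetic; the numbers below are illustrative assumptions in the spirit of the text, not measurements.

```python
# Back-of-the-envelope sketch of the bandwidth argument above: raw link speed
# times a conservative safety margin, spread over one block interval.
link_mb_per_s = 10          # nominal 10 MB/s connection
usable_fraction = 0.03      # leave headroom for DoS spikes, mempool traffic, other apps
block_interval_s = 12       # e.g. a 12-second block time

max_block_mb = link_mb_per_s * usable_fraction * block_interval_s
print(f"Sustainable block size: ~{max_block_mb:.1f} MB every {block_interval_s}s")
# A few percent of raw capacity lands in the 1-5 MB range the text mentions.
```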


Telcos back Artificial Intelligence, Internet of Things for 5G in India

The drivers that may trigger IoT applications, according to him, include low cost of storage and computing data on the cloud platform, emerging edge computing trends, falling costs of data, sensors, devices, and availability of mobile app development platforms. Following the Covid-19 pandemic, IoT is expected to drive significant transformation in the healthcare sector. "Hospital drug and waste management, robotic surgery, real-time health monitoring and diagnostics via IoT will stand to witness increased adoption." Bharti Airtel is working with the Swedish gear maker Ericsson on aerial drones for security and surveillance purposes, and dropping of relief material in emergency situations. Billionaire Mukesh Ambani-owned Reliance Jio together with Korean Samsung Networks has been working on virtual classrooms, and previously demonstrated high-definition content streaming. Kochhar feels that bringing futuristic technologies such as AR and VR to classrooms may redefine education and skilling of students. "AR and VR require higher bandwidth, lower latency and network resiliency. ..." 


Implementing a digital transformation at industrial companies

Before pursuing digital opportunities, leaders must first develop and align on a digital vision for their organization, looking at both the overall digital strategy and value proposition for their companies. They should begin by assessing their capabilities, estimating the resources required, and contemplating potential partnerships that could help them achieve their goals. Other practical issues include the feasibility of the proposed initiatives and their potential value. The basic question underlying all strategic plans is this: How can digital help us transform core business processes or generate new opportunities? When developing the road map, industrial companies should consider the strategic implications for the incumbent business, including disruptions to any offline distribution channels as digital sales grow. Companies should also address the inevitable channel conflicts in the strategic road map by acknowledging the risks, evaluating the potential impact, and creating a path forward to mitigate any issues. For instance, companies should determine what roles they expect the distributors to play with the new digital channels. Some may decide to eliminate distributors and conduct all business through e-commerce while others may keep offline and online channels. 


Can You Build a Machine Learning Model to Monitor Another Model?

Can you train a machine learning model to predict your model’s mistakes? Nothing stops you from trying. But chances are, you are better off without it. We’ve seen this idea suggested more than once. It sounds reasonable on the surface. Machine learning models make mistakes. Let us take these mistakes and train another model to predict the missteps of the first one! Sort of a “trust detector,” based on learnings from how our model did in the past. ... In regression problems, sometimes you can build a “watchdog” model. This happens when your original model optimizes the prediction error, taking into account its sign. If the second “watchdog” model predicts the absolute error instead, it might get something more out of the dataset. But here is the thing: if it works, this does not tell us that the model is “wrong” or how to correct it. Instead, it is an indirect way to evaluate the uncertainty of data inputs. (Here is a whole paper that explores this in detail). In practice, this returns us to the same alternative solution. Instead of training the second model, let’s check if the input data belongs to the same distributions!
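
Here is a minimal sketch of the “watchdog” setup described above, on synthetic data: a primary regressor is trained first, then a second regressor is fit to the primary model’s absolute error so it can serve as an uncertainty signal. Both model choices and the data are illustrative assumptions, not from the article or the cited paper.

```python
# Sketch of the "watchdog" idea: the second model learns the first model's
# absolute error, acting as an indirect uncertainty estimate, not a fix.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
noise_scale = 0.2 + np.abs(X[:, 0])            # heteroscedastic noise
y = X @ np.array([1.5, -2.0, 0.5, 0.0, 1.0]) + rng.normal(scale=noise_scale)

X_train, X_hold, y_train, y_hold = train_test_split(X, y, random_state=0)

primary = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
abs_error = np.abs(y_hold - primary.predict(X_hold))

watchdog = GradientBoostingRegressor(random_state=0).fit(X_hold, abs_error)
# watchdog.predict(x) now estimates how large the primary model's error tends
# to be for inputs like x -- an uncertainty signal, not a correction.
```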


4 robotic process automation project fails to avoid

Many organizations select "low-hanging-fruit" RPA initiatives without a true analysis of their workflows and how those affect other processes. Most businesses are stumped by a deceptively simple question: Which are the right processes for automation? Determining where to start with your RPA program is critical to success. Using advanced process mining and discovery tools to do a thorough analysis of your business processes will give you a "digital twin" of how they currently work and let you know which are best suited for digital transformation. ... RPA on its own cannot understand unstructured documents, so you need AI-enabled bots with content intelligence. In this way, bots can carry out tasks such as reading a document; categorizing, routing, extracting, and validating data from it; and doing other tasks related to understanding and processing unstructured content. Using content intelligence with RPA can speed your processes and ready your organization to add more experiential opportunities to engage with customers via interactive mobile apps, cognitive virtual assistants that combine voice and conversational AI, and chatbots.


Blue Prism 7 shifts focus from RPA to programmable digital workers

“Scaling intelligent automation within the cloud and enabling increased demand will be the ultimate differentiator in a year of significant growth for the market,” Blue Prism CEO and chair Jason Kingdon told VentureBeat. While other RPA vendors have aimed to improve the technical characteristics of RPA infrastructure, Blue Prism has focused on improving the programmability, manageability, and integration of RPA infrastructure. Technical infrastructure efforts are important, as RPA’s original focus on making it easier to simulate user interaction with applications often incurred infrastructure scaling liabilities. Focus is shifting, however, as the major RPA vendors explore different approaches to scaling people’s ability to quickly create new automations with appropriate guardrails. That is key to Blue Prism’s recent efforts. A pointed criticism of traditional approaches to RPA — despite what the name implies — has centered around their focus on automating tasks rather than processes. “We looked at how to automate the process of programming not just tasks, but an entire digital workforce end-to-end, and that guided our redesign of Blue Prism’s platform for V7,” Kingdon said.


SolarWinds hackers resurface to attack government agencies and think tanks

The group behind the infamous SolarWinds hacks is on another cyberattack spree, this time targeting not just government agencies but others as well. In a report published Thursday, Microsoft revealed that the threat actor Nobelium launched a series of attacks this past week against government agencies, think tanks, consultants, and non-governmental organizations. More than 25% of the victims were involved in international development, humanitarian and human rights work, according to Microsoft. Affecting more than 150 different organizations, the attacks targeted 3,000 separate email accounts. Many of the attacks were blocked automatically by security software, with Microsoft's Windows Defender catching the malware used to try to compromise the organizations. Identifying the culprit as Nobelium, Microsoft pointed out that this is the same group behind the SolarWinds hack in 2020. Those attacks, which exploited a security hole in a SolarWinds monitoring tool, hit different government agencies and were deemed to be sponsored by Russia. Microsoft called the latest incident a continuation of different information gathering efforts by Nobelium to target government agencies involved in foreign policy.


Surviving Automation: It's Now Coming for White-Collar Workers

Adapting and expanding one’s skillset is one tactic for avoiding redundancy via automation. “Seek out any training available, either internally — many firms provide internal training — or via well-regarded sites such as Coursera, Data Camp, etc.,” Coker of the University of Westminster suggested. Pay attention to news and developments in your field, he said, and keep your own skills up to date accordingly. Also, the tools that allow automation to happen must be created, Edge pointed out. That involves software developers, coders, UI/UX professionals, yes — but it also requires expertise from those with deep experience in a given field. One of the best ways of surviving automation in your field is to find a way to get in front of the people designing automation software in order to help them do their jobs better, Edge said. “That requires a little understanding of how software works — but more importantly, to move into those product design roles, we need to think more deeply about what we do and why.” Additionally, as companies invest more in digital transformation, there will be increased demand for professionals with experience in what that looks like in their particular industries.


Building a better you

A healthy dose of common sense — and humanity — helps when making changes. Grit and persistence only go so far. Milkman advises that “when you keep hitting a wall on a particular goal, it’s time to step back, reassess, and think about the bigger picture instead of making yourself miserable.” Don’t overengineer the solutions, either. For example, although we know that forming stable routines is key to habit formation, you must build in sufficient buffers for life events or hiccups that may make it difficult to follow your plan. Otherwise, you’ll end up disappointed and less likely to sustain your new behaviors. In one experiment, those who were rewarded for exercising on a more flexible schedule kept working out a lot more at other times, too — not only at the time they’d said was most convenient. In this instance, a more flexible approach wound up embedding a new behavior. The others, who had agreed to exercise at a fixed time and day, Milkman writes, transformed from “Routine Rachels” into “Rigid Rachels.” That is, when events made it impossible to exercise at the regular time, they didn’t compensate by exercising at other times.



Quote for the day:

"People buy into the leader before they buy into the vision." -- John C. Maxwell

Daily Tech Digest - May 28, 2021

What is a Data Lake? It is not a Data Swamp

A data lake is a place for storing large amounts of data that originate from various sources and are stored in their raw form. Importantly, the heterogeneous data is neither cleaned nor transformed before the loading process. After the loading process is complete, the data is available in a single system. In addition to structured data, a data lake also stores and manages semi-structured (CSV, logs, JSON), unstructured (e-mails, documents, reports), and binary data (video, audio, images). The list of all possible formats is of course incomplete, but I think you know what I mean. The goal is to gather all company data in one place in order to be able to quickly access the entire data stock. Users should be able to immediately create visualizations, reports, and analyses from the data. ... In order for the principle of the data lake to work efficiently for you and not result in a data swamp in which data can no longer be found, the collected data must offer business value for the future. It is very difficult for analysts to extract information from the sheer volume of data. This is especially true when no metadata or tags are used. Without these, it is hardly possible for analysts to classify the data.


How to Get Developer and Security Teams Aligned

The concept of policies replacing security standards builds on the idea of culture shifts. Security standards are typically just a piece of documentation saved on Confluence or GSuite somewhere. They may get examined by a developer during a mandatory annual training session, or occasionally for reference, but they aren’t dynamic and are rarely top of mind. Those responsible for enforcing such standards are normally compliance or security operations specialists, who are logically distanced from developers. Aside from low adoption rates and disruptions to Agile workflows, security standards often lead to the ‘enforcer’ becoming the bad guy. This pushes even more of a wedge between dev and security, making security feel a bit like doing your taxes (and no one wants that). If the expertise of the traditional ‘enforcer’ is shared with developers and dynamic, adaptable policies are adopted in place of rigid standards, then security simply becomes part of the workflow. Zero-trust networking is a great example of this. Zero-trust networking is probably the best way to secure your infrastructure, and it relies on expertly defined and managed policies being present through each of its 10 principles.

The evolution of the chief information officer (CIO)

Future success relies on leaders’ digital ability as much as their aptitude for uniting teams and encouraging people to embrace new technology and new ways of working at every level of the organisation. Without the leadership to make new systems and processes work and deliver against business objectives, outlay in innovation quickly becomes a source of future technical debt. Developing leadership behaviours including emotional intelligence (EQ) will help CIOs to build empathy; understanding the human impact of transition ensures teams feel heard and valued, reduces resistance to change and enables CIOs to build trust. Equally, strong communication skills will enable CIOs to speak in both the languages of data and business and use storytelling to share their vision and secure buy-in from teams and shareholders. CIOs are also protectors. With greater concern over business threats, CIOs safeguard their organisations’ assets and future. As well as managing data governance and cyber security, they can add business value by anticipating the opportunities and risks presented by disruption.

It’s time to shift from verifying data to authenticating identity

One of the more complex vishing schemes is the man-in-the-middle attack, in which a fraudster sets up two parallel conversations between a business and its customer. The business believes it is connecting with the customer, and the customer thinks they are talking to the business — but in reality, it is the fraudster interacting with both. The fraudster might initiate the scheme by requesting the issuance of a one-time passcode via a session on the business’s website. In parallel, posing as the business, the fraudster calls the unwitting customer and, using social engineering, convinces the individual to read off the one-time passcode sent by the business. The fraudster then uses this information to log in to the customer’s account and perform unauthorized transactions. Since the fraudster was able to provide all requested data to pass each point in the verification process, access is granted. With synthetic identity fraud, criminals combine real and fake information to create a fictitious identity, which they use to open up financial accounts and make fraudulent purchases. While a false identity might seem easy to spot, the reality is much more challenging.

Serverless Computing Brings New Security Risks

Given the distributed nature of serverless functions (essentially, the reason for their flexibility and scalability), many existing security tools will provide little to no visibility into, or control over, these computing environments. Many of the security attacks that will occur in serverless functions will be a result of misconfigurations and mistakes that happen outside the purview of the security team, and due to legacy solutions which don’t translate to serverless architectures. Further, because abstracted workloads create blind spots, attackers will have more room to maneuver undetected. Serverless functions will even render some traditional DevSecOps tools less useful. Scanning tools must monitor hundreds of individual repositories instead of a single monolithic repository, while application performance monitoring (APM) tools lack security proficiency and cannot protect from the OWASP Serverless Top 10 risks. ... For many organizations, serverless architecture is a very different and unique computing environment – unlike anything they’ve experienced or had to protect before now. That reality means that organizations need a fresh approach to securing these environments and will need to look beyond the traditional tools they have in their tech stack today.


Center for Internet Security: 18 security controls you need

The Center for Internet Security has updated its set of safeguards for warding off the five most common types of attacks facing enterprise networks—web-application hacking, insider and privilege misuse, malware, ransomware, and targeted intrusions. In issuing its CIS Controls V8 this month, the organization sought to present practical and specific actions businesses can take to protect their networks and data. These range from making an inventory of enterprise assets to account management to auditing logs. In part the new version was needed to address changes to how businesses operate since V7 was issued three years ago, and those changes guided the work. “Movement to cloud-based computing, virtualization, mobility, outsourcing, work-from-home, and changing attacker tactics have been central in every discussion,” the new controls document says. CIS changed the format of the controls a bit, describing actions that should be taken to address threats and weaknesses without saying who should perform those tasks. That put the focus on the tasks without tying them to specific teams within the enterprise. The controls each come with detailed procedures for implementing them along with links to related resources.


Advantages of Cloud Computing In Banking Can’t Be Ignored

The key to successful digital banking transformation includes embracing the cloud. While there have been reservations in the past around cloud security and regulation, cloud computing solutions are becoming prevalent in the marketplace for both traditional and non-traditional financial institutions. The use of data and deployment of advanced analytics, machine learning, and artificial intelligence requires more processing power than all but the largest financial institutions possess. The good news is that there are several cloud-based solution providers, like IBM, that have created industry-specific solutions for the banking industry. According to IBM, “Organizations have an enormous opportunity to leverage cloud computing to drive innovation and improve their competitive position. Cloud computing – whether private, hybrid or public – enables organizations to be far more agile while reducing IT costs and operational expenses. In addition, cloud models enable organizations to embrace the digital transformation necessary to remain competitive in the future.”


The cybersecurity industry is guarding against all the wrong threats

Smart technology itself, increasingly being deployed across government and private sector systems, may soon create new webs of vulnerability, according to a number of leading cybersecurity researchers. The problem is twofold: Hackers will ultimately begin using artificial intelligence against systems and there is concern that an inability to quickly spot flaws in machine learning models could create even more vulnerabilities. It would be naïve to believe that criminal hackers, who have already built help desk support operations and a vast marketplace for “plug and play” intrusion tools, would not find a way to use AI for attacks. “My guess is this isn’t very far off, and we had better start thinking about its implications,” said security technologist Bruce Schneier. “As AI systems get more capable, society will cede more and more important decisions to them, which means that hacks of those systems will become more damaging.” There is also concern within the cybersecurity community that growing use of machine learning could be opening new avenues of exploit for threat actors. Adi Shamir, professor at the Weizmann Institute in Rehovot, Israel, and a co-founder of RSA, has been analyzing the fragile state of neural networks and recently published a paper on his findings.


AIOps Has a Data(Ops) Problem

There are two issues with data collection. The first is proper instrumentation. It sounds easier than it is. The entire observability, monitoring, and AIOps eco-system depends on properly instrumenting your observable sources. If your systems, devices, services, and infrastructure are not properly instrumented, then you will have data blind spots. No matter how much data you collect from certain areas, if you do not have a holistic view of all the telemetry components, you will be getting a partial view of any system. Obviously, the instrumentation depends mostly on developers. The second issue is integration. As any AIOps vendor will tell you, this is probably the most difficult part of getting your AIOps solution going. The more input from varying telemetry sources, the better the insights will be. Any good AIOps solution will be able to integrate easily with the basic golden telemetry – logs, metrics, and traces. In addition, integrating with notification systems (OpsGenie, PagerDuty, etc.) and maybe event streams (Kafka, etc.) is useful as well. However, I quite often see major enterprises struggling a lot to integrate AIOps solutions with their existing enterprise systems.


Agile Transformation: an Integral Approach

"An integral approach incorporates all of the essential perspectives, schools of thought, and methods into a unified, comprehensive and accurate framework" is a simple definition from the book. The main leverage of Integral Theory is that it provides a meta-framework onto which other techniques, approaches, and frameworks can be mapped. The fundamental premise of integral thinking is that any school of thought or method that has been around for any length of time must have some truth to it -- "all perspectives are true, but partial" is a Wilber quote. Integral helps us take multiple perspectives on situations, which is key for change and adaptability in a complex world, instead of getting stuck in our own, limited perspective. As Ken Wilber said to us when we interviewed him for the book -- there are two things that, above all else, make real transformation possible -- the ability to take the perspective of others, and the ability to see one’s own "seer". Both of these are fostered by using integral thinking. Doing this cuts through our confusion when we run into the challenges of existing culture and leadership mindsets when implementing agile.



Quote for the day:

"Open Leadership: the act of engaging others to influence and execute a coordinated and harmonious conclusion." -- Dan Pontefract

Daily Tech Digest - May 27, 2021

Event-driven architecture: Understanding the essential benefits

Event-driven architectures address these problems head-on. At its core, event-driven architecture relies upon facilitating inter-service communication using asynchronous messaging. In the asynchronous messaging pattern, a service sends information in the form of a discrete message to a message broker and then moves on to other activities. On the broker side, that message is consumed by one or many interested parties at their convenience. All communication happens independently and discretely. It’s a “fire and forget” interaction. However, while things do get easier by focusing on component behavior instead of managing the burdens of endpoint discovery that go with inter-service communication, there is still a good deal of complexity involved in taking a messaging approach. In an event-driven architecture, a component needs to understand the structure of an incoming message. Also, a component needs to know the format and validation rules for the message that will be emitted back to the broker when processing is complete. Addressing this scope of complexity is where a schema registry comes into play.
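
Here is a toy sketch of the fire-and-forget interaction described above, with an in-process queue standing in for the message broker; in a real system the broker would be something like Kafka or RabbitMQ, and the event format would be governed by the schema registry mentioned at the end of the excerpt.

```python
# Toy sketch of asynchronous, fire-and-forget messaging. An in-process queue
# stands in for the broker; producer and consumer never call each other directly.
import json
import queue
import threading

broker = queue.Queue()  # stand-in for a broker topic/queue

def order_service():
    # Producer: emit an event and immediately move on to other work.
    event = {"type": "OrderPlaced", "order_id": 42, "total": 99.5}
    broker.put(json.dumps(event))
    print("order service: event published, continuing other work")

def billing_service():
    # Consumer: pick up events at its own convenience.
    event = json.loads(broker.get())
    print(f"billing service: consumed {event['type']} for order {event['order_id']}")
    broker.task_done()

threading.Thread(target=billing_service).start()
order_service()
broker.join()  # wait until the consumer has processed the event
```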


Prevention Is the Only Cure: The Dangers of Legacy Systems

As companies organize their first post-pandemic steps back into an office or hybrid workflow, the threat from legacy systems is greater than ever. In part, this is due to the legacy of shadow IT, in which systems or devices are introduced without explicit IT department approval. It is more than likely that the rapid shift to work from home caused an uptick in shadow IT, attack surfaces, and exposure to related vulnerabilities. Large corporations like Shell have proven time and again that they are vulnerable to these attack vectors, but they may not need to be as concerned about shadow IT as their midsize counterparts. While large staff size may increase the potential for mismanagement, major corporations are also more likely to have systems and audits in place to manage their environment and control changes. Many midsize businesses and enterprises may be less aware of weaknesses in their system that leave them exposed to shadow IT's risks. How can a firm prevent the proliferation of legacy or shadow IT? The only solution is the proper management of all aspects of IT.


Data is the Cure for What’s Ailing the Supply Chain

The data generated through an end-to-end RFID solution provides an additional key benefit: sustainability. Fact: As the decade progresses, there will be increasing pressure on companies to help achieve global climate goals, while meeting increased consumer demands. According to a recent report from McKinsey, “consumer companies will have to greatly reduce the natural and social costs of their products and services to capitalize on rising demand for them without taxing the environment or human welfare.” Because data captured through an RFID solution is utilized to increase the velocity of moving goods onto trucks, it can also be used to configure and optimize space on the truck according to the most efficient route to the destination of packages. Aggregating the data enables movement of packages along the most efficient route. Accurately boxed and shipped packages results in fewer trucks on the road and fewer airplanes in the air. Simply stated, the carbon footprint is minimized through data: simply by knowing what you have, and in what order you should be delivering it.


The evolution of the modern CISO

With remote work poised to remain a mainstay in societal patterns, and growing interest in a “work from anywhere” mentality continues, the onus to be adaptable has never been higher for CISOs. For CISOs, there’s a fine balance between continuing to make progress on strategic initiatives that will reduce risk and improve security maturity, while also being adaptable enough to stop and pivot as needed. Further, as businesses adapt to meet the growing needs of the customer, the business needs to do so with CISOs in mind in order to stop and ask the right questions to enable secure-from-the-start—such as, “Will this new technology we’re onboarding potentially open up new security gaps?” or “Does branching into new sectors open our business up to new areas of attack?” and “Could we expose our customer base to threats by switching CRM platforms?” To be able to answer these questions, CISOs need to be able to adapt across three major areas that are constantly shifting and inherently intertwined: the needs of the business and customer, the current threat landscape, and risk calculation and prioritization.


A better future of work is coming, but only if we make the right choices now

It sounds like an almost idealistic vision of work, particularly for those who have spent the majority of their careers tied to a desk with little say over how they approach or structure their role. But evidence increasingly suggests that those businesses which do offer greater flexibility, including the ability to work remotely, are likely to attract and retain the most-skilled workers. Tech workers in particular are getting choosier about where they work. After a year of rapid digital transformation across industries, demand for professionals with cloud, cybersecurity and software development skills is peaking. Couple that with the fact that tech workers increasingly want to work remotely, and will potentially change roles if it enables them to do so, and it stands to reason that flexible-working policies are crucial to enticing top digital professionals. However businesses pursue their agendas, Wilmott and Walsh both warn that organizations risk introducing further inequalities if they don't manage the move to hybrid delicately. For workers whose roles are primarily on-site, Walsh says there is room for organizations to explore how they can offer at least some of the benefits enjoyed by those who can work remotely.


Embed Data Science Across the Enterprise

The organizational model should reflect the maturity of the AI capabilities. For organizations just getting started with AI, a centralized model typically builds critical mass. Then, it should be distributed. I like a hybrid model. Being fully decentralized is like herding cats—the technology does not get leveraged effectively. At J&J, IT owns the technology that underpins AI: cloud computing, data repositories, APIs. But we’ve embedded the data science in our functions, where it is closest to the business problems we’re looking to solve. We’ve also created a data science council, with representation from each part of the business, that oversees our portfolio, talent, and technology. ... Think of data as an asset. Very few companies are embracing data as the core of insight and decision-making. That requires spending time to understand where you’re at with data and where you want to go. Also, never underestimate the need for change management in increasing the use of AI. I could develop all the models in the world, but if no one is using them, there’s no value. You’ve got to work with the people who want to evolve.


How E-Commerce Is Being Forced to Evolve In a Post-Covid World

The entirety of the Covid-19 pandemic has served as a case study in the cascading effects of social change. Things barely imaginable just over a year ago are now part of daily life. In the context of enterprise, these changes manifest as both shifts in consumer behavior and new challenges for businesses; none have been left unaffected. Against this backdrop, the role of e-commerce has grown immensely. It’s not just niche items being ordered online anymore; in 2020, over 50% of all consumers used direct-to-consumer sales channels to buy everyday items like groceries, cleaning products and other consumer-packaged goods. In fact, online grocery sales boomed, growing 54% last year and nearly reaching the $100 billion mark. Brick-and-mortar was suffering even before the pandemic hit, but now appears to be in outright decline, with fewer new storefronts opening in 2020 than in any of the three years prior. Even more surprisingly, a full 60% of interactions between consumers and businesses now take place online.


Keeping up with data: SaaS, unstructured data and securing it with identity

More than four out of 10 companies admitted they don’t know where all of their unstructured data is located. Nearly every company surveyed reported that managing access to unstructured data is difficult, citing challenges such as too much data, the lack of a single access solution for multiple repositories, and a lack of visibility into access, including where data lives and who owns it. Given this, it is unsurprising that a Canalys report found companies spending record sums on cyber security to protect the rapid digital transformation of the last year: 50% of European businesses stated that investing in new security technology was their highest prevention spending priority. Yet, despite these efforts and intentions, the number of successful attacks continues to be higher than ever, with Canalys reporting that “more records were compromised in just 12 months than in the previous 15 years combined.” ... Looking even more closely at the research, we can connect the dots between these findings and the rise in cloud adoption, the unstructured data that resides in cloud apps and systems, and IT’s attempts to secure this monster network of information.


Everything You Need To Know About Stress Testing Your Software

Stress testing is a type of testing that verifies the reliability and stability of software applications. The goal is to measure the error-handling capabilities of the software and ensure that it does not crash under extremely heavy load. Consider two scenarios where website traffic may increase: during an online sale or holiday season, a website can experience a tremendous spike in traffic; and when a blog is mentioned by an influencer or news outlet, the site can see a sudden jump in visitors. It is imperative to stress test a website to ensure that it can accommodate such spikes, as failure to do so will result in lost revenue and an inconsistent brand image. To summarize, the end goals of running a software stress test include: analyzing which type of user data (if any) gets corrupted; determining any triggers that signal risk; identifying any hardware features that can affect software endurance; and predicting failures connected to high loads.
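As a rough illustration of the load-spike scenario described above, the Python sketch below fires a burst of concurrent requests at a hypothetical endpoint and counts how many fail or time out. The URL, request count, and concurrency level are placeholder assumptions, not values from the article.

```python
# Minimal stress-test sketch: hammer an endpoint with concurrent requests
# and count failures. URL, request count, and concurrency are illustrative.
import concurrent.futures
import urllib.request

URL = "https://example.com/health"   # hypothetical endpoint under test
TOTAL_REQUESTS = 500                 # simulated traffic spike
CONCURRENCY = 50                     # simultaneous "users"

def hit_endpoint(_):
    """Return True if the endpoint answered with HTTP 200, False otherwise."""
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        # Any network error or timeout counts as a failure under load.
        return False

if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        results = list(pool.map(hit_endpoint, range(TOTAL_REQUESTS)))
    failures = results.count(False)
    print(f"{failures}/{TOTAL_REQUESTS} requests failed under load")
```

In a real stress test you would ramp the concurrency well beyond expected peak traffic and watch error rates, response times, and server-side resource usage as the load climbs.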


Preparing Future Data Scientists for Jobs of Tomorrow

Cooperation between The Data Mine and the private sector has included discussions with the Central Indiana Corporate Partnership, which Ward says focuses on economic development in the state and interconnection between companies. Through his conversations with colleagues from the C-suite, Ward says many executives do seem to understand the need for data science talent and actively foster connections with universities. That could bring more people with data science skills into the workforce, as well as give internal staff chances to grow in their careers. The Data Mine is a way for Purdue to keep up with the changing technology, infrastructure, and tools of data science, Ward says. “Think of data-driven projects as being much larger than just science or just engineering, maybe with some traditional disciplines.” ... The focus of the program is student-centric, with Ward’s office in the students’ residence hall. As companies retool and think ahead about how data-driven insights can lead to business impact, he says The Data Mine works with students on nine-month projects over the academic year. “We’ve completely gotten away from the idea of having a 10-week internship,” Ward says. “It’s just way too short to have a substantial experience.”



Quote for the day:

"The test we must set for ourselves is not to march alone but to march in such a way that others will wish to join us." -- Hubert Humphrey

Daily Tech Digest - May 26, 2021

Reduce process bottlenecks with process advisor for Power Automate, now generally available

“Process advisor simplifies the hardest thing about process analysis – sharing details of a process across an organization. With the recording and sharing function in process advisor, it makes it very easy for process and business analysts to collaborate and to identify opportunities to optimize their automation workflows.” —Brian Hodel, Principal Power Platform Developer, T-Mobile. Learn more about T-Mobile adding RPA to their Six Sigma toolbox. “By leveraging RPA in Microsoft Power Automate we anticipate a time savings of 90 percent in processing time for our operations, and the potential to see reduced maintenance cost of up to 20 percent, which aligns with our value of simplicity in our manufacturing work. Looking forward, we are excited to gain deeper insights into how we work and where we might benefit from automation with process advisor to streamline and digitize our operator rounds.” —Linda W. Morris, Enterprise Automation Lead, Chemours. Learn how Chemours automated SAP, reducing processing time. “Process advisor allows us to gain real insights into how work is getting done.”


Why Conscious Economics Is the Leadership Style of the Future

Conscious consumerism and conscious investment are not new philosophies. Conscious investment, often referred to as “impact investing,” is a market worth more than $700 billion annually with 20% projected yearly growth. As its name implies, impact or conscious investing is practiced with the intent of making investment decisions that benefit companies, customers and society as a whole. Conscious consumerism has likewise been on the rise in recent years, with more consumers choosing to shop at smaller local businesses rather than retail giants. When we combine the two to encompass the “conscious” aspect for investors, businesses and consumers alike, it creates the dynamic referred to as “conscious economics.” I was first introduced to this term two years ago when I attended an event by the Economic Club of Canada in Toronto. The event was headlined by a conversation between former President Barack Obama and Rhiannon Rosalind, the club’s president, CEO and owner. ... In other words, as aspects of our societies change and evolve around us, so too does our own inherent psychology — particularly our motivations for making changes to our way of life.


The rise of the cloud data platform

Many companies have different terms for what I’ve termed the cloud data platform. Oracle, for example, labels it the “enterprise data management cloud.” Nutanix uses the term “enterprise cloud.” And Cloudera, which offers a platform called the Cloudera Data Platform, actually calls the category the “enterprise data cloud.” “The enterprise data cloud is incredibly important to regulated verticals like banking, telcos, life sciences and government,” Cloudera’s Hollison said. “And they don’t want, for example, to have a bespoke security and governance model for each individual analytic function.” The structure imposed on regulated organizations by, well, regulations benefited them last year, when they needed to grow their universe of data sources. But those without a common structure to help engineers prepare and manage data from two related but separate silos found themselves wholly unprepared for the task. For them, part of the obstacle was that, almost by default, an enclosed model with its own dedicated dataset comes with all the data preparation and engineering, security, governance and MLOps it needs.


Rise in Opportunistic Hacks and Info-Sharing Imperil Industrial Networks

Brubaker, who worked on the Mandiant incident response team for the Triton attack, says that worries him. "These actors are building expertise and willingness [to make] contact with other actors. What if they meet up with a ransomware group" and combine forces, he asks. "That would make ransomware more impactful on OT." Dragos' Sergio Caltagirone, vice president of threat intelligence at the ICS security firm, called the City of Oldsmar attack "the perfect example" of the type of ICS attack his firm frequently sees. It's not so much the feared, sophisticated ICS custom-malware attack by better-resourced nation-state hackers, but threat actors breaking in via unknown ports left wide open on the public Internet, or via weak or compromised credentials. "A network that is unprepared and indefensible, but by an organization doing their best but that's chronically under-resourced and under-funded to protect itself ... it's a confluence of [more adversaries]" going after ICS networks and a failure of those networks to follow even the most basic security practices, Caltagirone says.


IoT helps make return-to-work safer

Innovatus Capital Partners, an independent adviser and portfolio-management firm, wants to ensure that as business leaders and employees return to the office, their expectations for clean, safe environments are met. To that end, the firm has deployed a smart air-quality monitoring system at its offices in Illinois and Tennessee. The system combines technologies from Veea, an edge-computing company, and Wynd Technologies, a provider of portable air purifiers. “Workers re-entering the commercial office space after Covid-19 need assurance that they are in the cleanest environment possible,” says Bradley Seiden, managing director at Innovatus. “That means you have to be able to measure the environment—specifically the air quality in the environment.” The company deployed air-quality sensors throughout common areas, where they collect metrics such as mold and CO2 levels, temperature, and humidity. The sensors can also identify the presence of airborne particles with signatures that might indicate the presence of coronavirus and various flu strains.
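As a purely illustrative sketch (the metric names and threshold values below are assumptions for illustration, not Veea's or Wynd's actual criteria), the snippet shows the kind of threshold check such a monitoring system might apply to readings collected from each room.

```python
# Flag rooms whose air-quality readings exceed example thresholds.
# Metric names and limits are placeholders, not any vendor's real criteria.
THRESHOLDS = {"co2_ppm": 1000, "humidity_pct": 60, "mold_index": 3}

def flag_rooms(readings):
    """readings: {room_name: {metric_name: value, ...}} -> {room_name: [breached metrics]}"""
    alerts = {}
    for room, metrics in readings.items():
        breaches = [m for m, limit in THRESHOLDS.items() if metrics.get(m, 0) > limit]
        if breaches:
            alerts[room] = breaches
    return alerts

print(flag_rooms({
    "lobby":      {"co2_ppm": 850,  "humidity_pct": 45, "mold_index": 1},
    "break_room": {"co2_ppm": 1250, "humidity_pct": 65, "mold_index": 2},
}))
# -> {'break_room': ['co2_ppm', 'humidity_pct']}
```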


Four proactive steps to make identity governance a business priority

In many cases, employees have access privileges to company information that they don’t need, and this has only proliferated during the COVID-19 pandemic. Consider all the hiring changes: millions being laid off, furloughed, adjusting to remote or hybrid work models, taking up side hustles or gig jobs, or getting new jobs as the economic dust settles. Ensuring access is revoked when employees leave, and that new hires only have access to what they need, is an arduous task, and one that many businesses let fall by the wayside. Deprovisioning is the best way to address this problem, but revoking privileges can create IT downtime and disrupt workflow, so it is another undertaking many aren’t signing up for voluntarily. But when you consider that all it takes is one disgruntled former employee or a savvy hacker ready to take advantage of loose access privileges, it’s time to get serious. Fortunately, automation can help streamline the deprovisioning process by matching users’ privileges and access to the level of security those systems require. From there, the system can automatically restrict a user’s access to certain enterprise systems based on their role.
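A minimal sketch of that role-based idea, assuming a hypothetical role-to-entitlement mapping (the role names and systems below are invented for illustration): compare what a user can currently reach with what their role allows, and revoke the difference.

```python
# Hypothetical role-based deprovisioning sketch: the roles, systems, and
# mapping are illustrative only, not a real identity-governance product.
ROLE_ENTITLEMENTS = {
    "analyst":    {"crm", "reporting"},
    "engineer":   {"source_control", "ci_cd", "reporting"},
    "offboarded": set(),                     # departed employees keep nothing
}

def access_to_revoke(role, current_access):
    """Return the systems the user can still reach but is no longer entitled to."""
    allowed = ROLE_ENTITLEMENTS.get(role, set())
    return set(current_access) - allowed

# Example: an engineer moved to 'offboarded' loses every remaining entitlement.
print(access_to_revoke("offboarded", {"source_control", "ci_cd", "crm"}))
# -> {'source_control', 'ci_cd', 'crm'}
```

In practice this comparison would run automatically whenever HR or directory data changes, so stale privileges are caught without waiting for a manual review.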


How Data Science and Business Intelligence Can Improve Strategic Planning in Organizations

Data science consists of various methods and processes that support and guide the extraction of information and knowledge from raw data. Used properly, it has vast applications in business. A business analyst will work with business administration and take part in exploratory data analysis (EDA), an approach to analyzing datasets that summarizes their main characteristics and refines the data so it can be put to productive use. With large amounts of data at their disposal, businesses can make better business, financial and marketing decisions. If a business has historical data on which products sold well at which times or locations, it can act on that to increase sales. Big Data helps retail outlets and fast-moving consumer goods sellers a great deal; with proper data, important decisions can be made that improve profits. Data-driven decision making has many applications: in finance, for example, it might mean figuring out the most cost-effective way to use cloud services or hire new staff, or the cheapest way to promote a new product.
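As a small illustration of the EDA and sales-analysis workflow described above, the pandas sketch below summarizes an invented sales dataset and ranks product/location combinations by units sold. The column names and figures are assumptions for illustration, not real data.

```python
# Minimal EDA sketch on hypothetical sales data: summarize the dataset,
# then see which product/location combinations sold best.
import pandas as pd

sales = pd.DataFrame({
    "product":  ["soap", "soap", "shampoo", "shampoo", "soap"],
    "location": ["north", "south", "north", "south", "north"],
    "units":    [120, 80, 95, 150, 110],
})

print(sales.describe(include="all"))        # main characteristics of the data
best_sellers = (sales.groupby(["product", "location"])["units"]
                     .sum()
                     .sort_values(ascending=False))
print(best_sellers)                         # where each product sells well
```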


Let’s Talk Quantum – In Defense & Warfare

This article is NOT intended to showcase the dark side of quantum computing; rather, the intention is to highlight the possible applications of this groundbreaking technology. Defense scientists in many countries are taking a closer look at the impact that Quantum Computing, Quantum Communications and IoT will have on their national security and defense. It is believed that two areas, Quantum Encryption and Quantum Sensors, will have an enormous impact on this field in the coming years. The use of quantum computers in communications that could revolutionize Underwater Warfare is of paramount importance in the defense world. Quantum Computation and Quantum Communication will also revolutionize “Defense Logistics”: reduced cycle times, increased situational awareness and more efficient communication are just some of the advantages they will offer in this area. Technologies like Artificial Intelligence, Virtual Reality, Augmented Reality and Blockchain are already in use to enhance defense capabilities.


Combatting Insider Threats with Keyboard Security

The human interface device, the keyboard, is often overlooked when companies look to implement internal security measures, yet it is also the place where almost all insider threats begin. Organizations need to prioritize the use of security-enhanced keyboards that can stop threats before they are ever entered into the network. Many well-known thin client manufacturers already support the use of secure mode and have integrated the necessary software for it. Recent keyboard improvements can also provide higher security through two-factor authentication using a smart card. Keyboards can now come equipped with a contactless card reader that reads RFID and NFC cards or tags. These new security-equipped keyboards make an array of safety applications possible; for example, ID systems can be used for closed user groups via the keyboard, and company IDs can be easily read in. These keyboards can then be partnered with innovative mouse technology that has integrated fingertip sensors for user authentication, greatly improving security.


Cloudless IoT: IoT Without the Cloud

Privacy is not the only reason to avoid the cloud; other reasons include stability, persistence, data privacy, security, and necessity. When it comes to stability, if the Internet connection is unstable, the cloud may be difficult to reach, making the whole system unstable. As for persistence, cloud services may go away, so avoiding the cloud allows an IoT system to keep running without relying on a hosting company to stay around. With data privacy, sometimes data simply should not leave the location where it is generated. On the security side, a self-contained network means fewer network connections and therefore fewer attack vectors. Lastly, sometimes there just isn't Internet access. Any cloud-based software may become unreachable if the cloud goes down; it can happen to the best of us, and any IoT solution that wholly depends on the cloud will go out with it. Even worse, the cloud may go away altogether: maybe the company that runs the servers goes out of business, or maybe it just isn't economically viable to keep it running. This has happened many times. Sometimes the reason for not wanting to use the cloud is very simple: Internet access just isn't available.
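A minimal local-first sketch of this design choice, under the assumption of a simple temperature sensor and an optional cloud endpoint (both invented for illustration): the control logic runs entirely on the device, and the cloud upload is best-effort, so losing the Internet or the cloud service never stops the system.

```python
# Local-first sketch (an assumption, not any vendor's design): process
# readings on the device and treat cloud upload as optional, so the system
# keeps working when the Internet or the cloud service disappears.
import json
import urllib.request

CLOUD_ENDPOINT = "https://example-cloud.invalid/ingest"  # hypothetical service

def act_on_reading(reading):
    """Local control logic that never depends on connectivity."""
    if reading["temperature_c"] > 30:
        print("local action: turning fan on")

def try_cloud_upload(reading):
    """Best-effort upload; failure is logged, never fatal."""
    try:
        req = urllib.request.Request(
            CLOUD_ENDPOINT,
            data=json.dumps(reading).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=2)
    except Exception:
        print("cloud unreachable, continuing locally")

reading = {"temperature_c": 32}
act_on_reading(reading)      # always runs, with or without the cloud
try_cloud_upload(reading)    # nice to have, not required
```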



Quote for the day:

"Leadership is liberating people to do what is required of them in the most effective and humane way possible." -- Max DePree