Daily Tech Digest - June 06, 2021

The computer will see you now: is your therapy session about to be automated?

AI research has not improved significantly since that review, she argues. “Based on the available evidence, I’m not optimistic.” Yet she added that a personalized approach could work better. Rather than assuming a bedrock of emotional states that are universally recognizable, algorithms could be trained on a single person over many sessions, including their facial expressions, their voice and physiological measures like their heart rate, while accounting for the context of those data. Then you’d have better chances of developing reliable AI for that person, Barrett says. Even if such AI systems can eventually be made more effective, ethical issues still have to be addressed. In a newly published paper, Torous, Depp and others argue that, while AI has the potential to help identify mental health problems more objectively, and could even empower patients in their own treatment, it must first address issues like bias. When some AI programs are trained, they are fed huge databases of personal information so they can learn to discern patterns, and white people, men, higher-income people, or younger people are often overrepresented in those databases. As a result, the programs might misinterpret unfamiliar facial features or a rare dialect.


WhatsApp Just Gave 2 Billion Users A Reason To Stay

The specter of regulation continues to hang over Facebook and its Big Tech rivals, but this has raised a different regulatory question: at what point does a privately held communication platform become a utility? Social media can be turned on or off with little consequence. But replacing regulated mobile networks with a multinational “over the top” service that is used by almost everyone is a different deal. WhatsApp’s biggest victory—the reason it’s now on almost all our phones—was its displacement of SMS as the world’s most popular, most ubiquitous messaging tool. The nearest equivalent is Apple’s iMessage in some markets, especially the U.S. But iMessage isn’t a separate platform from core, regulated messaging. And, more to the point, it’s owned by a product giant, not a data-based advertising giant. WhatsApp’s numbers are interesting. While its penetration in Europe is strong, in the developing world it’s staggering. In Kenya, South Africa, Nigeria, Argentina, Malaysia, Colombia and Brazil it has reached more than 90% of adult internet users. In most countries, WhatsApp is now the market leader. Think that through when next reading about WhatsApp’s shift into payments and shopping.


Insurance to Mitigate the Risk of AI Systems Coming into View

It’s not clear that AI software suppliers guarantee the accuracy of their algorithms, or that insurance companies cover the risks associated with AI products. Having insurance against AI risk could smooth the path to AI adoption. Among manufacturers trying out AI, many are stuck in “pilot purgatory,” not yet successfully scaling digital transformation. “Greater support for businesses looking to implement new solutions could help to improve the adoption rate,” Yoskovitch stated. Insurers could help enterprises at these three stages of AI adoption, Yoskovitch suggests ... AI failure modes are an evolving area of research. “It is not possible to provide prescriptive technological mitigations,” the authors stated. Cyber insurance comes the closest, but it is not a perfect fit. If bodily harm occurs because of an AI failure, such as when the image recognition system on an autonomous car fails to perform in snow or frost, cyber insurance is not likely to cover the damage, although it may cover the losses from the resulting business interruption, the authors suggest.


‘Back to human’: Why HR leaders want to focus on people again

Delivering a great employee experience relies on the same principles used in design thinking for products and services. Like skilled designers, CHROs are starting with the customer and working backward. Just as every customer journey has its associated pain points, every big organization has career journeys, each with its own identifiable moments of frustration. One thing HR leaders can do along these lines is to harness the energy and insight of their colleagues to increase engagement among new hires and current employees. Cisco, for instance, launched a 24-hour “breakathon” with more than 800 employees that used design-thinking principles to identify the moments that matter most in the interactions between HR and employees. This session led to a complete redesign of onboarding: YouBelong@Cisco, a full prototype solution that targeted common pain points for people starting careers at the company. HR leaders want to use these technologies to help customize and track the needs of each individual on the employee journey, whether that means advancing educational efforts, helping customers and clients to solve problems, supporting the development of colleagues, or simply being part of a great team.


Plea To ML Researchers: Give Data Curation A Chance

Many experts believe data must be used in their natural form to give an unvarnished output. While there is no problem with this argument, Rogers said, it needs more elaboration. “In that case, the ‘natural’ distribution may not even be what we want: e.g. if the goal is a question answering system, then the ‘natural’ distribution of questions asked in daily life (with most questions about time and weather) will not be helpful,” wrote Rogers. She further added that there is still a lot of research work to be done before developers can study the world as it is. Some developers feel their data is large enough for their training set to encompass the ‘entire data universe’. Rogers said collecting all data is impossible, as it would pose legal, ethical, and practical challenges. Meanwhile, many are in favour of developing algorithmic alternatives to data curation. As per Rogers, this is a good possibility; however, in the current scenario, such solutions would complement data curation rather than completely replace it. A few experts believe data curation is part of the process and should not become a task so big that it eclipses the original purpose of developing a model.


Ultra-high-density hard drives made with graphene store ten times more data

Graphene enables a two-fold reduction in friction and provides better corrosion and wear protection than state-of-the-art solutions. In fact, a single graphene layer reduces corrosion by 2.5 times. Cambridge scientists transferred graphene onto hard disks with iron-platinum as the magnetic recording layer, and tested Heat-Assisted Magnetic Recording (HAMR) – a new technology that increases storage density by heating the recording layer to high temperatures. Current carbon-based overcoats (COCs) do not perform at these high temperatures, but graphene does. Thus graphene, coupled with HAMR, can outperform current HDDs, providing an unprecedented data density of more than 10 terabytes per square inch. “Demonstrating that graphene can serve as protective coating for conventional hard disk drives and that it is able to withstand HAMR conditions is a very important result. This will further push the development of novel high areal density hard disk drives,” said Dr Anna Ott from the Cambridge Graphene Centre, one of the co-authors of this study. A jump in HDDs’ data density by a factor of ten and a significant reduction in wear rate are critical to achieving more sustainable and durable magnetic data recording.


Implementing An Effective Intelligent Master Data Management Strategy

Since MDM is not a one-time implementation or cleansing exercise, business owners must own the data along with the business processes from various departments and units. The data governance process implemented must identify, measure, capture, and rectify data quality issues in the source system itself. To keep the strategy running, a formal model to manage data as a strategic resource should comprise detailed business rules, data stewardship, data control, and compliance mechanisms. For data governance to be effective and supported by stakeholders and senior management, it needs to be treated as part of daily responsibilities rather than a one-off initiative. ... Before diving deep into the MDM implementation process, defining a future roadmap is crucial in showing how later stages will be accomplished, consistent with the strategic objectives of the organization. This ensures that your MDM exercise does not turn into a catastrophic event caused by structural flaws that corrupt your entire data system. Further, roll out upgrades, conduct regular testing on standard communication interfaces, and set benchmarks to quantify KPI success, proving each stage stable before opening the gates to the rest of your data stream.


Neuromorphic Chip: Artificial Neurons Recognize Biosignals in Real Time

The researchers first designed an algorithm that detects HFOs by simulating the brain’s natural neural network: a tiny so-called spiking neural network (SNN). The second step involved implementing the SNN in a fingernail-sized piece of hardware that receives neural signals by means of electrodes and which, unlike conventional computers, is massively energy efficient. This makes calculations with a very high temporal resolution possible, without relying on the internet or cloud computing. “Our design allows us to recognize spatiotemporal patterns in biological signals in real time,” says Giacomo Indiveri, professor at the Institute for Neuroinformatics of UZH and ETH Zurich. The researchers are now planning to use their findings to create an electronic system that reliably recognizes and monitors HFOs in real time. ... However, this is not the only field where HFO recognition can play an important role. The team’s long-term target is to develop a device for monitoring epilepsy that could be used outside of the hospital and that would make it possible to analyze signals from a large number of electrodes over several weeks or months.


Hardware buyers are scrambling to find chip shortage work-arounds

Because World Insurance runs most of its operations on a private cloud in its own data center, finding the servers it needs to expand its operations is an ongoing battle. Before the chip shortage, the company would primarily buy white-label servers to add capacity. Now, like so many others, it is sourcing servers from wherever it can find them. Many manufacturers are in the same boat, said Jens Gamperl, CEO of Sourcengine, an online marketplace for electronic components. Gamperl's customers are scrambling to find chips from any source—regardless of whether the supplier and its products have been vetted. ... To ensure some sort of quality control, manufacturers are asking Sourcengine to perform those functions. Price gouging also is a big issue. Parts that cost pennies pre-pandemic are now going for many times their original price. "I came across, four weeks or five weeks ago, a situation where a 50 cent part was offered to us for $41," he said. For large businesses, these increased expenses shouldn't have a noticeable impact on the bottom line, given that other expenses, like travel, went to zero, he said.


Hybrid work: How to prepare for the turnover tsunami

Among the multiple factors at play, according to the Prudential Financial survey, are employee concerns about career advancement. ... Additionally, the wide and rapid acceptance of remote work has opened up new job opportunities to work from anywhere. It's a perfect storm for creating some degree of turnover, says Brian Abrahamson, CIO and the associate laboratory director for communications and IT at the U.S. Department of Energy's Pacific Northwest National Lab. "We used to talk about the impacts of fear, uncertainty, and doubt on people. Add to this the impacts of burnout and isolation and you have a recipe for workforce chaos," Roberts says. "A question every CIO should be asking their people managers is, 'Are the recruiters who are trying to poach our people painting a better picture of a future working with their company than we are of ours?'" The time to start addressing anticipated turnover is now. "If you acknowledge that the risk factors affecting the likelihood of increased attrition in the near term are there, the first recommendation I would make is simple: Accept and prepare for it," says Selective Insurance CIO John Bresney.



Quote for the day:

"If you care enough for a result, you will most certainly attain it." -- William James

Daily Tech Digest - June 05, 2021

The rise of cybersecurity debt

Complexity is the enemy of security. Some companies are forced to put together as many as 50 different security solutions from up to 10 different vendors to protect their sprawling technology estates — acting as a systems integrator of sorts. Every node in these fantastically complicated networks is like a door or window that might be inadvertently left open. Each represents a potential point of failure and an exponential increase in cybersecurity debt. We have an unprecedented opportunity and responsibility to update the architectural foundations of our digital infrastructure and pay off our cybersecurity debt. To accomplish this, two critical steps must be taken. First, we must embrace open standards across all critical digital infrastructure, especially the infrastructure used by private contractors to service the government. Until recently, it was thought that the only way to standardize security protocols across a complex digital estate was to rebuild it from the ground up in the cloud. But this is akin to replacing the foundations of a home while still living in it. You simply cannot lift-and-shift massive, mission-critical workloads from private data centers to the cloud.


Zero trust: The good, the bad and the ugly

Right from the start, the name zero trust has unwelcome implications. On the surface, it appears that management does not trust employees or that everything done on the network is suspect until proven innocent. "While this line of thinking can be productive when discussing the security architecture of devices and other digital equipment, security teams need to be careful that it doesn't spill over to informing their policy around an employer's most valuable asset, its people," mentioned Jason Meller, CEO and founder at Kolide. "Users who feel their privacy is in jeopardy, or who do not have the energy to continually justify why they need access to resources, will ultimately switch to using their own personal devices and services, creating a new and more dangerous problem—shadow IT," continued Meller. "Frustratingly, the ill-effects of not trusting users often forces them to become untrustworthy, which then in turn encourages IT and security practitioners to advocate for more aggressive zero trust-based policies." In the interview, Meller suggested the first thing organizations looking to implement zero trust should do is form a working group with representatives from human resources, privacy experts and end users themselves.


From Boardroom To Service Floor: How To Make Cybersecurity An Organizational Priority Now

Of course, companies don’t just want to identify risk. They want to prevent relevant threats and secure their IT infrastructure. To achieve this, boardrooms, C-suite executives and cybersecurity teams will need to focus on the most potent risks — from insider threats to misconfigured databases — to enhance their defensive posture to meet the moment. This should begin by addressing your in-house vulnerabilities. With so many data breaches caused, in part, by employees, companies can defend data by enhancing their educational and oversight protocols. For instance, employee monitoring that harnesses user behavior analytics can empower companies to identify employees who might be vulnerable to a phishing scam, allowing leaders to direct teaching and training to mitigate the risk. (Full disclosure: Employee monitoring is among my company’s key provisions.) Similarly, cybersecurity software that restricts data access, movement and manipulation can ensure that data is available on a need-to-know basis, reducing opportunities for negligence or accidents to undermine data security.


How Testers Can Contribute to Product Definition

The approach to closing the understanding gap that has proven successful is "listening before talking". In practice, this means meeting the stakeholders, learning about their motivation and goals, building relationships and establishing a collaboration – basically, a feedback loop. The next step was to explore the clients’ needs and their user personas by talking to product manager(s), reading industry-related articles, or analyzing customer data, because each user persona has a different goal and therefore a different task to complete in our product. For me, it’s essential to understand these differences to learn what is important to each one of them and aim for the specific quality characteristics when providing feedback on design, user experience, or product requirements. ... Practically, the shorter the feedback loop, the better. To make it shorter, I try to be there when the project starts to kick off and requirements are shaped, or when the first prototypes are done, and generally be proactive by asking what’s the next important thing, inviting different stakeholders for pairing and collaborating closely to discover and share important information about the product.

API Security Depends on the Novel Use of Advanced ML & AI

By creating API-driven applications, we have exposed a much bigger attack surface. That’s number one. Number two, of course, we have made things more challenging for attackers, but the attack surface being so much bigger now needs to be dealt with in a completely different way. The older class of applications took a rules-based system as the common approach to solving security use cases. Because you had a single application, and the application would not change that much in terms of the interfaces it exposed, you could build in rules to analyze how traffic goes in and out of that application. Now, when we break the application into multiple pieces and bring in other paradigms of software development, such as DevOps and Agile methodologies, we create a scenario where the applications are always rapidly changing. There is no way rules can catch up with these rapidly changing applications. We need automation to understand what is happening with these applications, and we need automation to solve these problems, which rules alone cannot do.
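
The interview stops short of code, but the "automation instead of rules" idea can be sketched with an unsupervised anomaly detector trained on normal API traffic. Everything below (the feature choices, thresholds, and numbers) is an illustrative assumption, not any vendor's actual model:

```python
# Hedged sketch: flag anomalous API requests by learning "normal" traffic,
# rather than hand-writing rules per endpoint. Features are hypothetical:
# payload bytes, parameter count, latency (ms), caller requests-per-minute.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[512, 4, 80, 30],
                            scale=[64, 1, 10, 5],
                            size=(5000, 4))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)  # no rules written; the model learns the baseline

suspect = np.array([[20480, 40, 900, 600]])  # oversized, chatty, slow caller
print(detector.predict(suspect))  # -1 = anomaly, 1 = inlier
```

As the interfaces change, the model is simply retrained on fresh traffic, which is exactly the property static rules lack.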


Everything You Need To Know About India’s Centre for Artificial Intelligence and Robotics

CAIR is involved in research and development in AI, robotics, command and control, networking, information and communication security, along with the development of mission-critical products for battlefield communication and management systems. CAIR was appraised for Capability Maturity Model Integration (CMMI) Maturity Level 2 in 2014 and has ISO 9001:2015 certification. As part of the Defence Research and Development Organisation (DRDO), robotics was one of the priority areas of CAIR, said V S Mahalingam, former director, CAIR. Mahalingam joined DRDO in 1986 and served in Electronics & Radar Development Establishment (LRDE) till 2000 before he moved to CAIR. “Concentrating on the development of totally indigenous robots, the lab developed a variety of controllers and manipulators for Gantry, Scara, and other types of robots. With the experience gained from these initial years, the lab developed an autonomous guided vehicle (AGV). The expertise in control systems required for robotics was applied to the development of control laws for Tejas fighter,” Mahalingam added.


How do I become a network architect?

For the most part, network architects fall into department management roles overseeing teams of network engineers, system administrators, and perhaps application developers. The goal of a network architect is to design efficient, reliable, cost-effective network infrastructures that meet the long-term information technology and business goals of an organization. The trick is to accomplish those long-term goals while also permitting the organization to meet its short-term business goals and financial obligations. ... Successful network architects must be able to see the big picture regarding current and future information technology infrastructure, not only for the organization but for the industry and general business environment as well. Individuals fulfilling the job role must be able to produce a documented vision of network infrastructure now and in the future. Documentation is important because a network architect must be able to present their vision of current and future network needs and goals to C-level management, employees, and other stakeholders. They must be able to communicate why their vision is correct, and why those stakeholders should provide the resources necessary to bring that vision to fruition.


The Beauty of Edge Computing

The volume and velocity of data generated at the edge is a primary factor that will impact how developers allocate resources at the edge and in the cloud. “A major impact I see is how enterprises will manage their cloud storage because it’s impractical to save the large amounts of data that the Edge creates directly to the cloud,” says Will Kelly, technical marketing manager for a container security startup. “Edge computing is going to shake up cloud financial models so let’s hope enterprises have access to a cloud economist or solution architect who can tackle that challenge for them.” With billions of industrial and consumer IoT devices being deployed, managing the data is an essential consideration in any edge-to-cloud strategy. “Advanced consumer applications such as streaming multiplayer games, digital assistants and autonomous vehicle networks demand low latency data so it is important to consider the tremendous efficiencies achieved by keeping data physically close to where it is consumed,” says Scott Schober, President/CEO of Berkeley Varitronics Systems, Inc. It’s not much of a stretch to view edge as an integral component of the fast-evolving hybrid cloud.


Is STG Building a New Cybersecurity Powerhouse?

The consensus is STG will likely form either a complete new company out of its newly acquired businesses - hoping the sum of the parts will make STG a major player in the security space - or simply allow customers to pull together a security plan on an a la carte basis from STG's various parts. "You can see a future where we're going to have a clash of some really sophisticated industry heavyweights. You're going to have to compete with Microsoft; you're going to have to compete with Cisco. So if you're going to get in a fight with Microsoft and Cisco, you better bring a big stick. And it looks like they've now got a big stick," says Frank Dickson, program vice president at IDC. Peter Firstbrook, vice president and analyst with Gartner, believes STG is putting together a portfolio to deliver a one-stop shopping experience for those looking for a suite of cybersecurity products and solutions to protect their organization. "One trend they could take advantage of is the propensity of buyers to seek out fewer, more strategic vendors that have integrated solutions," Firstbrook says. "Eighty percent of buyers want to consolidate the number of security products and vendors to make their security operations more efficient."


Using Distributed Tracing in Microservices Architecture

Observability is monitoring the behavior of infrastructure at a granular level. This facilitates maximum visibility within the infrastructure and supports the incident management team in maintaining the reliability of the architecture. Observability is done by recording the system data in various forms (tools) such as metrics, alerts (events), logs, and traces. These functions help in deriving insights into the internal health of the infrastructure. Here, we are going to discuss the importance of tracing and how it evolved into a technique called distributed tracing. Tracing is continuous supervision of an application’s flow and data progression, often representing the track of a single user’s journey through an app stack. Traces make the behavior and state of an entire system more obvious and comprehensible. Distributed request tracing is an evolutionary method of observability that helps keep cloud applications in good health. Distributed tracing is the process of following a transaction request and recording all the relevant data throughout the path of a microservices architecture.
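
To make the idea concrete, here is a minimal distributed-tracing sketch using the OpenTelemetry Python SDK. The service and span names are invented for illustration, and a real deployment would export spans to a collector such as Jaeger or Zipkin rather than the console:

```python
# Minimal tracing sketch with OpenTelemetry: one user transaction becomes a
# parent span with a child span per downstream microservice call.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout-service")

with tracer.start_as_current_span("place-order") as span:
    span.set_attribute("user.id", "u-42")  # ties the spans to one journey
    with tracer.start_as_current_span("reserve-inventory"):
        pass  # call to the inventory service would go here
    with tracer.start_as_current_span("charge-payment"):
        pass  # call to the payment service would go here
```

Because every span carries the same trace ID, a tracing backend can reassemble the full request path across services.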



Quote for the day:

"Every great leader can take you back to a defining moment when they decided to lead." -- John Paul Warren

Daily Tech Digest - June 04, 2021

We’ve all had to learn new ways of leading and managing. But it’s important to keep the company culture alive, and the best workplace cultures are built on a foundation of trust and autonomy. Leaders can inadvertently undermine that by monitoring employee activities too closely and checking in too often. Micromanaging can hurt morale and stifle engagement, creativity, and innovation. So, if you’ve strayed into micromanager mode, it’s time to rebalance your approach. Keep in mind that one byproduct of a remote work schedule is that people may be tackling their workload outside the usual 9-5 schedule. Maybe they’re working later in the evening or earlier in the morning, so they’ll have time to deal with the kids’ schooling. As long as quality work is getting done, does that matter? As a manager, you need to figure out what’s important and get clarity on how changes in work routines affect business goals. Align the company vision with specific business goals and make sure that the way employees complete tasks (and how you interact with your team) support those goals. That’s how you can empower your people and maintain control where it counts without overdoing it.


Ancestry’s DevOps Strategy to Control Its CI/CD Pipeline

We had this DevOps culture of, “You own the code, so you own everything about deploying the code.” It was very much kind of like a startup mentality in terms of how we dealt with teams and DevOps. We had a large, centralized team that handled operations before that. As part of our technological transformation, we went from this large centralized operations team, where you throw your code over the wall and let them deploy it, to “You own your deploys.” In that process, we ended up basically not giving teams a whole lot of direction. ... We’ll get you the rules that you’ll need but the process is up to you. Teams started to share best practices; some teams would adopt other team’s best practices but in that kind of ecosystem there’s a lot of divergent paths you can take in how you deploy your code. That’s exactly what happened to us. We had a very fragmented ecosystem of processes. We started to have a lot of issues with that, which in turn led us to start to create policies but the policies weren’t very enforceable because we didn’t have any insight into how they were being applied in each team’s ecosystem.


Cryptocurrency dealers face closure for failing UK money laundering test

The governor of the Bank of England, Andrew Bailey, has told investors they should be prepared to lose all their money if they dabble in cryptocurrencies. Crypto assets are not covered by UK schemes that help investors reclaim cash when companies go bust. The European Central Bank has compared bitcoin’s meteoric rise to other financial bubbles such as “tulip mania” and the South Sea Bubble, which burst in the 17th and 18th centuries. However, banks including Goldman Sachs and Standard Chartered have launched their own cryptocurrency trading desks to take advantage of their rapid growth. The price of bitcoin has tumbled 40% since hitting all-time highs of more than $64,000 (£45,000) in mid-April. It was trading at $38,706 on Thursday afternoon. Only five crypto asset firms have been admitted to the FCA’s formal register so far. Another 90 firms are being assessed through the temporary permit scheme, which has been extended by nine months to allow the FCA to fully review all of the applications. While a further 51 have withdrawn their applications, some may not be covered by the FCA’s rules to register, meaning not all of them will be forced to shut down.


Conversation about the .NET type system

One thing to remember about the line between CLR and C# concepts is that CLR concepts provide the possibility to make some logic work, and C# concepts provide an interface for actual developers to work with. The C# concepts are an opinionated view on the possible programs that can be written using CLR concepts, and over time, the developers of the C# language have found ways for programmers to more clearly and succinctly represent intent on a fairly regular cadence, while the fundamental capabilities provided by CLR concepts are typically much slower to evolve. ... Having classes that behave like values has always been possible in C#, and there are many types in the framework that already do this. Generally, though, these classes fall into the category of “data” style objects, Tuple<> for example. It’s not good or bad to do this; it’s instead an exercise in evaluating trade-offs: heap vs. stack, cost of passing / returning, etc … In the case of records we wanted to explore classes first because that is what most of the customers who valued records were already using. In future versions of the language, we will allow them to be declared as structs as well, to help customers who need to make different trade-off decisions.
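
The discussion is specific to C#, but the underlying idea of "classes that behave like values" is portable. As a cross-language analogy only (not the C# feature itself), a frozen Python dataclass gives a class the same value-style equality and immutability trade-off:

```python
# Analogy for "classes that behave like values": frozen dataclasses compare
# by contents and refuse mutation, like record-style data objects.
from dataclasses import dataclass

@dataclass(frozen=True)
class Point:
    x: int
    y: int

a = Point(1, 2)
b = Point(1, 2)
print(a == b)   # True: equality by value, not by object identity
print(a is b)   # False: still two separate heap objects
# a.x = 5       # would raise FrozenInstanceError: values don't mutate
```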


A Beginner’s Guide To Intel oneAPI

oneAPI allows data parallelism by leveraging two types of programming: API-based programming and direct programming. Within API-based programming, the algorithm for this parallel application development is hidden behind a system-provided API. oneAPI defines a set of APIs for commonly used data-parallel domains and provides library implementations across various hardware platforms. This enables a developer to maintain performance through multiple accelerators with minimal coding and tuning. ... oneDPL has algorithms and functions to speed up DPC++ kernel programming. The oneDPL library follows the C++ standard library’s functions and includes extensions to support data parallelism and extensions to simplify data-parallel algorithms. ... oneMKL is used for fundamental mathematical routines in high-performance computing and applications. This functionality is divided into dense linear algebra, sparse linear algebra, discrete Fourier transforms, random number generators, and vector math. ... oneDAL helps speed up big data analyses by providing optimised building blocks for algorithms for different stages of data analytics—preprocessing, transformation, analysis, modelling, validation, and decision making.
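
As a small, hedged taste of the API-based programming model, oneDAL's optimizations can be reached from Python through the Intel Extension for Scikit-learn (the scikit-learn-intelex package); the clustering workload below is a stand-in example:

```python
# Sketch: route supported scikit-learn estimators onto oneDAL kernels.
# Assumes `pip install scikit-learn-intelex`; the dataset is synthetic.
from sklearnex import patch_sklearn
patch_sklearn()  # must run before importing the estimators being accelerated

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=100_000, centers=8, random_state=0)
model = KMeans(n_clusters=8, random_state=0).fit(X)  # oneDAL-backed fit
print(model.inertia_)
```

The calling code is unchanged scikit-learn, which is the point: the parallel implementation hides behind the library API, exactly as the API-based programming model intends.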


The growing pains of quantum computing

Large corporations now have the resources and relationships to access machines directly, and those machines are available from IBM, from Honeywell, and from other companies as well. It’s also now possible to subscribe to these machines, because some of the big cloud providers (Amazon Web Services and Azure are two examples) have taken initial steps towards offering what we might describe as quantum processing units alongside regular high-performance computing. Those early access agreements are now available for subscription, sometimes on a daily or even an hourly basis. And then beneath all of that, there is a clutch of start-ups like IQM in Finland, Alpine Quantum Technologies in Austria and Oxford Quantum Computing in the UK that are all on a very steep trajectory. Their processors will be available in a variety of ways. All of this means that a large corporate entity has a variety of ways of accessing quantum processors, and what we do is to pull all of that together. We have two distinguishing features. 


Don’t Let Employees Pick Their WFH Days

One concern is managing a hybrid team, where some people are at home and others are at the office. I hear endless anxiety about this generating an office in-group and a home out-group. For example, employees at home can see glances or whispering in the office conference room but can’t tell exactly what is going on. Even when firms try to avoid this by requiring office employees to take video calls from their desks, home employees have told me that they can still feel excluded. They know that after the meeting ends the folks in the office may chat in the corridor or go grab a coffee together. The second concern is the risk to diversity. It turns out that who wants to work from home after the pandemic is not random. In our research we find, for example, that among college graduates with young children, women want to work from home full-time almost 50% more than men. This is worrying given the evidence that working from home while your colleagues are in the office can be highly damaging to your career. In a 2014 study I ran at a large multinational in China, we randomized 250 volunteers into a group that worked remotely for four days a week and another group that remained in the office full time.


Quantum computing: How should cybersecurity teams prepare for it?

For those organizations not involved in the development of quantum computers, preparatory actions are clear. We must urgently overcome our inability to keep existing computers secure; the quantum computer of the future will be of little use if we fail to break our dependency on legacy technology and poor management practices today. And as quantum computing improves, we must remain in front of our adversaries by leveraging new technology before it is adopted by those who wish to do us harm. ... Quantum computing is far too immature for any immediate real-world application or for us to see the benefits that its theory promises. We can make some educated guesses, though. Peter McMahon, of Applied and Engineering Physics at Cornell University, writes of quantum computing capabilities: “We’re trying to find something useful we can do with a near-term quantum computer that would answer a question in quantum gravity, or high-energy physics more generally, that couldn’t be answered otherwise. For instance, can we simulate a model of a black hole on a quantum computer? Would that be useful? We don’t know if we’ll find anything, but it’s very interesting to try.”


Exchange Servers Targeted by ‘Epsilon Red’ Malware

The initial point of entry for the attack was an unpatched enterprise Microsoft Exchange server, from which attackers used Windows Management Instrumentation (WMI) – a scripting tool for automating actions in the Windows ecosystem, primarily used on servers – to install other software onto machines inside the network that they could reach from the Exchange server. It’s not entirely clear if attackers leveraged the infamous Exchange ProxyLogon exploit that was a major pain point for Microsoft earlier in the year. However, the unpatched server used in the attack was indeed vulnerable to this exploit, Brandt observed. During the attack, threat actors launched a series of PowerShell scripts, numbered 1.ps1 through 12.ps1, as well as some that were named with a single letter from the alphabet, to prepare the attacked machines for the final ransomware payload. The scripts also delivered and initiated the Epsilon Red payload, he wrote. The PowerShell scripts use a “rudimentary form of obfuscation” that didn’t hinder Sophos researchers’ analysis but “might be just good enough to evade the detection of an anti-malware tool that’s scanning the files on the hard drive for a few minutes, which is all the attackers really need,” Brandt noted.


How Hasura 2.0 Works: A Design and Engineering Look

Hasura can implement API caching for dynamic data automatically because Hasura’s metadata configuration has detailed information about both the data models and the authz rules, which in turn carry information about which user can access what data. This is very useful because, otherwise, developers often need to build web APIs that provide data access manually. Moreover, devs need deep domain knowledge so they can build caching strategies that recognize which queries to cache for which users or user groups, using caching stores like Redis to provide API caching. But this is just part of the problem. The harder bit is cache invalidation. Developers use TTL-based caching to avoid worrying about cache invalidation vs. consistency, and let the API consumers deal with the inconsistency. Hasura can, in theory, provide automated cache invalidation as well, because Hasura has deep integrations into the sources of data and all access to this data can go through Hasura, or use the data source’s CDC mechanism. This part of the caching problem is similar to the “materialized view update” issue.
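
To see why TTL-based caching sidesteps invalidation, consider this toy cache (a generic sketch, not Hasura's implementation): entries simply expire after a fixed window, so a consumer may briefly read stale data if the source changed inside that window:

```python
# Toy TTL cache: staleness is bounded by the TTL instead of being eliminated
# by invalidation. Generic sketch, not Hasura's implementation.
import time

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key, load):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit and hit[0] > now:
            return hit[1]          # served from cache; may be stale
        value = load(key)          # expired or missing: hit the real source
        self._store[key] = (now + self.ttl, value)
        return value

cache = TTLCache(ttl_seconds=60)
user = cache.get("user:42", load=lambda k: {"id": k, "name": "Ada"})
```

Hasura's claim is that, because it already knows the data models, the authz rules, and the write paths, it can in principle evict entries the moment the data changes instead of waiting out a TTL.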



Quote for the day:

"Speak softly and carry a big stick; you will go far. -- Theodore Roosevelt

Daily Tech Digest - June 03, 2021

Preparing for the Upcoming Quantum Computing Revolution

The primary challenge to successful quantum computing lies within the technology itself. In contrast to classical computers, a quantum computer employs quantum bits, or qubits, that can be both 0 and 1 at the same time, Jagannathan says. Such two-way states give the quantum computer its power, yet even the slightest interaction with their surroundings can create distortion. "Correcting these errors, known as quantum error correction (QEC), is the biggest challenge, and progress has been slower than anticipated," he says. There's also an important and possibly highly destructive aspect to quantum technology. "In addition to [a] wide range of benefits . . . it is also expected that [cybercriminals] will someday be able to break public key algorithms that serve as a basis for many cryptographic operations, like encryption or digital signatures," says Colin Soutar, managing director and cyber and strategic risk leader with Deloitte & Touche. "It's important that organizations carefully understand what exposure they may have to this [threat] so that they can start to take mitigation steps and not let security concerns overshadow the positive potential of quantum computing," says Soutar.


DataOps Goes Mainstream As Atlan Lands Big

Data drives business growth and provides valuable insights prior to any conclusive decision making. As enterprises scale, many challenges surface. For instance, working professionals, including data scientists, analysts, and engineers, join in with different skill-sets and tools. Different people, different tools, different working styles – all these lead to a major bottleneck. Business segments are in dire need of data management to create contextual insights; now is the time to improve the quality and speed of data streaming into the organisation and get leadership commitment to support and sustain a data-driven vision across the company. This is where DataOps (data operations) comes in handy. For instance, users can integrate their tables from Databricks with Atlan in a series of steps. There are some prerequisites for establishing a connection between an Atlan and a Databricks account:

1. Go to the Databricks console and select “Clusters” from the left sidebar.
2. Select the cluster you want to connect with Atlan. The cluster should be in a Running state for the Atlan crawler to fetch metadata from it.
3. Click on “Advanced Options” in the “Configuration” tab.


Ransomware-as-a-service: How DarkSide and other gangs get into systems to hijack data

They're offering a service and they sit somewhere on the darker side of the internet and they offer what's called ransomware-as-a-service. They recruit affiliates or essentially sub-contractors who come in, who use their platform and then attack companies. And in the case of DarkSide, if you actually log into the infrastructure and take a look at it, which is something we in the research community actively do, they had a very polished operation. They provide technical support for their affiliates who are breaking into companies. They provide monetization controls so that an affiliate can go in and see how much has been paid and what's outstanding, and manage the money and all that. They're basically like companies, and that's the challenge with ransomware now: it's moved from this sort of opportunistic thing where there were a few criminals scattered around the world doing this, to being these as-a-service operations that basically mean any enterprising criminal can get access to ransomware for, I've seen it for less than $100, and then use that to infect stuff. And obviously at the lower end, you're talking about things that aren't very sophisticated. The problem is it doesn't need to be sophisticated.


3 Methods to Reduce Overfitting of Machine Learning Models

The most robust method to reduce overfitting is to collect more data. The more data we have, the easier it is to explore and model the underlying structure. The methods we will discuss in this article are based on the assumption that it is not possible to collect more data. Since we cannot get any more data, we should make the most of what we have. Cross validation is a way of doing so. In a typical machine learning workflow, we split the data into training and test subsets. In some cases, we also put aside a separate set for validation. The model is trained on the training set. Then, its performance is measured on the test set. Thus, we evaluate the model on previously unseen data. In this scenario, we cannot use a portion of the dataset for training; we are, in a sense, wasting it. Cross validation allows every observation to be used in both training and test sets. Ensemble models consist of many small (i.e. weak) learners. The overall model tends to be more robust and accurate than the individual ones. The risk of overfitting also decreases when we use ensemble models. The most commonly used ensemble models are random forest and gradient boosted decision trees.
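
Both ideas fit in a few lines of scikit-learn; the dataset below is synthetic and the scores are only illustrative:

```python
# Sketch: k-fold cross validation uses every observation for both training
# and testing, and a random-forest ensemble overfits less than one deep tree.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

single_tree = DecisionTreeClassifier(random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0)

# 5-fold CV: the data is split five ways, each fold serving once as test set.
print("tree:  ", cross_val_score(single_tree, X, y, cv=5).mean())
print("forest:", cross_val_score(forest, X, y, cv=5).mean())
```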


IT’s silent career killer: Age discrimination

There is a widespread misconception in most industries that older employees are not “digital savvy” and are afraid to learn new things when it comes to technology, Miklas adds. “This assumption often results in decisions that can result in being sued for age discrimination, especially when the older worker is passed over for promotion, not hired, or terminated,” he says. One issue that arises more in age discrimination claims than other types of discrimination is an employer’s use of selection criteria for hiring, promotion, or layoff decisions that are susceptible to assumptions about age, says Raymond Peeler, director of the Coordination Division, Office of Legal Counsel at the U.S. Equal Employment Opportunity Commission (EEOC). “For example, an employer making determinations about workers based on ‘energy,’ ‘flexibility,’ ‘criticality,’ or ‘long-term concerns’ are susceptible to employer assumptions based on the age of the worker,” Peeler says. The EEOC is responsible for enforcing federal laws that make it illegal to discriminate against job applicants or employees because of a person’s race, color, religion, sex, national origin, disability, genetic information, or age.


Helium Network combines 5G, blockchain and cryptocurrency

Self-styled ‘The People’s Network,’ the existing LoRa-based Helium Network is live with 28,000+ hotspot devices deployed in over 3,800 cities worldwide, and there are 200,000+ hotspot devices on backorder from various manufacturers. Helium aims to take that experience and apply it to a new tier of 5G connectivity enabled by the unique CBRS spectrum, 3550 MHz-3700 MHz, which the US Federal Communications Commission has made available on three tiers of access, two of which are open to non-government users. Though the Priority Access level is licensed, General Authorized Access permits open access for the widest group of potential users and use cases. Using gateways from Helium partner FreedomFi, hotspot hosts – including individual consumers – will have the option to earn Helium’s own HNT cryptocurrency, in part by offloading carrier cellular traffic to their 5G hotspots. The FreedomFi Gateways will be compatible with Helium’s existing open-source blockchain and IoT network and will by default act as a Helium hotspot, also mining rewards for proof of coverage and data transfers on the IoT network.


Abu Dhabi could achieve technological sovereignty thanks to quantum computing, says expert

In a panel discussion on whether UAE fintech is going global, Ellen Moeller, head of EMEA partnerships at Stripe, a San Francisco-based company that offers software to manage online payments, said key areas of interest for fintechs included ensuring that transactions were a “very frictionless experience” for consumers. “They’re used to calling a taxi from the touch of a button,” she said. “Why shouldn’t it be so simple when we’re talking about financial services? There’s a lot of opportunity for innovation for fintech. “The final piece is regulators and central banks embracing this innovation. I think we’ve only scratched the surface of fintech innovation and there’s lots more to come.” She added that the UAE “has all the right ingredients” to be a world-class technology and fintech hub, including a deep pool of talent and good investment climate. “We’ve seen the UAE do a remarkable job at fostering fintech,” she added. The region is seeing rapid growth in the number of tech start-ups in a range of fields, according to Vijay Tirathrai, managing director of Techstars, a company in the US state of Colorado, that supports tech start-ups.


A Quantum Leap for Quantum Computing

Quantum computers are expected to greatly outperform the most powerful conventional computers on certain tasks, such as modeling complex chemical processes, finding large prime numbers, and designing new molecules that have applications in medicine. These computers store quantum information in the form of quantum bits, or qubits — quantum systems that can exist in two different states. For quantum computers to be truly powerful, however, they need to be “scalable,” meaning they must be able to scale up to include many more qubits, making it possible to solve some challenging problems. “The goal of this collaborative project is to establish a novel platform for quantum computing that is truly scalable up to many qubits,” said Boerge Hemmerling, an assistant professor of physics and astronomy at UC Riverside and the lead principal investigator of the three-year project. “Current quantum computing technology is far away from experimentally controlling the large number of qubits required for fault-tolerant computing. ...”


Everyone Wants to Build a Cyber Range: Should You?

The most compelling reason for building a cyber range is that it is one of the best ways to improve the coordination and experience level of your team. Experience and practice enhance teamwork and provide the necessary background for smart decision-making during a real cyberattack. Cyber ranges are one of the best ways to run real attack scenarios and immerse the team in a live response exercise. An additional reason to have access to a cyber range is that many compliance certifications and insurance policies require varying degrees of mandatory cyber training. These requirements are driven by mandates and compliance standards established by the National Institute of Standards and Technology and the International Organization for Standardization (ISO). With these requirements in place, organizations are compelled to free up budgets for relevant cyber training. There are different ways to fulfill these training requirements. Per their role in the company, employees can be required to undergo certifications by organizations such as the SANS Institute.


The biggest diversity, equity and inclusion trends in tech

It’s important to take a look at the hiring strategy, and make sure that it attracts a diverse talent pool. Nabila Salem, president at Revolent Group, commented: “For the tech industry, there is more than just a moral imperative to solve the issue of missing equity. The lack of diversity within the tech sector also compounds upon a very real business challenge for organisations: a lack of available talent. “The consequences of not plugging this skills gap are of great concern: GDP growth across the G20 nations could be stunted by as much as $1.5 trillion over the next decade, if companies refuse to adapt to the needs that tech presents to us. “One way to overcome this is to invest in new, diverse talent to help solve both the skills gap and the lack of representation in tech. New, innovative programs like the Salesforce training provided by Revolent specialise in fuelling the market with the diverse, highly skilled new talent it so desperately needs. “There is an opportunity here, to address the issue of a lack of representation and an overall skills gap, all at once. Companies must be open to the idea that the average applicant is not as homogenous as they think. ...”


Shifting to Continuous Documentation as a New Approach for Code Knowledge

Continuously verifying documentation means making sure that the current state of the documentation matches the current state of the codebase, as the code evolves. In order to keep the docs in sync with the codebase, existing documentation needs to be checked against the current state of the code continuously and automatically. If the documentation diverges from the current state of the code, the documentation should be modified to reflect the updated state (automatically or manually). Continuously verifying documentation means that developers can trust their documentation and know that what’s written there is still relevant and valid, or at least get a clear indication that a certain part of it is no longer valid. In this sense, Continuous Documentation is very much like continuous integration - it makes sure the documentation is always correct, similar to verifying that all the tests pass. This could be done on every commit, push, merge, or any other version control mechanism. Without it, keeping documentation up-to-date and accurate is extremely hard, and requires manual work that needs to be repeated regularly.
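
A toy version of such a check, with an entirely hypothetical file layout, could run in CI on every push and fail when the docs mention a function that no longer exists in the code:

```python
# Toy continuous-documentation check (hypothetical paths docs/guide.md and
# src/): fail the build if the docs reference functions the code no longer
# defines, forcing documentation back in sync with the codebase.
import re
import sys
from pathlib import Path

doc_refs = set(re.findall(r"`(\w+)\(\)`", Path("docs/guide.md").read_text()))
code = "\n".join(p.read_text() for p in Path("src").rglob("*.py"))
defined = set(re.findall(r"^def (\w+)\(", code, flags=re.MULTILINE))

stale = doc_refs - defined
if stale:
    sys.exit(f"Docs reference missing functions: {sorted(stale)}")
```

Real tools track far richer links between docs and code, but the principle is the same: verification runs automatically on every change, like tests in continuous integration.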



Quote for the day:

"Without courage, it doesn't matter how good the leader's intentions are." -- Orrin Woodward

Daily Tech Digest - June 02, 2021

A recurrent neural network that infers the global temporal structure based on local examples

"Every day, we manipulate information about the world to make predictions," Jason Kim, one of the researchers who carried out the study, told TechXplore. "How much longer can I cook this pasta before it becomes soggy? How much later can I leave for work before rush hour? Such information representation and computation broadly fall into the category of working memory. While we can program a computer to build models of pasta texture or commute times, our primary objective was to understand how a neural network learns to build models and make predictions only by observing examples." Kim, his mentor Danielle S. Bassett and the rest of their team showed that the two key mechanisms through which a neural network learns to make predictions are associations and context. For instance, if they wanted to teach their RNN to change the pitch of a song, they fed it the original song and two other versions of it, one with a slightly higher pitch and the other with a slightly lower pitch. For each shift in pitch, the researchers 'biased' the RNN with a context variable. Subsequently, they trained it to store the original and modified songs within its working memory.


Cybersecurity industry analysis: Another recurring vulnerability we must correct

Security tooling is a must-have, but we need to look more widely and restore balance to the people component of security defense. Automation is the future, so why should we care about the human element of cybersecurity? Virtually everything in our lives is powered by software, and it’s true that automation is replacing the human elements that were once present in so many industries. It’s a sign of progress in a world digitizing at warp speed, with AI and machine learning hot topics keeping many organizations future-focused. So why, then, would a human-focused approach to cybersecurity be anything other than an antiquated solution to a technologically advancing problem? The fact that billions of data records have been stolen in breaches in the past year, including the most recent Facebook breach affecting over half a billion accounts, should indicate that we’re not doing enough (or taking the right approach) to make a serious counter-punch against threat actors. Cybersecurity tooling is a much-needed component of cyber defense, and tools will always have a place. Analysts have been absolutely on point in recommending the latest tools in a risk mitigation approach for enterprises, and that will not change.


Researchers Confront Major Hurdle in Quantum Computing

A time crystal is a strange state of matter in which interactions between the particles that make up the crystal can stabilize oscillations of the system in time indefinitely. Imagine a clock that keeps ticking forever; the pendulum of the clock oscillates in time, much like the oscillating time crystal. By applying a series of electric-field pulses to electrons, the researchers were able to create a state similar to a time crystal. They found that they could then exploit this state to improve the transfer of an electron’s spin state in a chain of semiconductor quantum dots. “Our work takes the first steps toward showing how strange and exotic states of matter, like time crystals, can potentially be used for quantum information processing applications, such as transferring information between qubits,” Nichol says. “We also theoretically show how this scenario can implement other single- and multi-qubit operations that could be used to improve the performance of quantum computers.” Both AQT and time crystals, while different, could be used simultaneously with quantum computing systems to improve performance.


How Ethical Hackers Play An Important Role In Protecting Enterprise Data

Data is an essential asset in the current dynamic setting. The value of data has made big organizations more vulnerable to cyberattacks. But it is wrong to believe that only a big company can suffer a data breach. In reality, no one is immune to data theft, whether you’re an individual, an SME, a large enterprise, or even a state. A surer way for organizations to protect themselves from an effective malicious attack is to engage competent ethical hackers. It would help if your organization had someone who understands how malicious hackers think. In such scenarios, it makes sense to take the help of ethical hackers. Ethical hacking in cybersecurity has its groundwork in data protection. Unlike cybercriminals, ethical hackers operate with the consent of the client. They use the same tools and techniques as malicious attackers. However, cybersecurity and ethical hacking experts intend to protect and secure your network, and because they can think like the bad guys, they can quickly discover your system's vulnerabilities and suggest how to resolve them before they are exploited.


Microsoft, GPT-3, and the future of OpenAI

There’s a clear line between academic research and commercial product development. In academic AI research, the goal is to push the boundaries of science. This is exactly what GPT-3 did. OpenAI’s researchers showed that with enough parameters and training data, a single deep learning model could perform several tasks without the need for retraining. And they have tested the model on several popular natural language processing benchmarks. But in commercial product development, you’re not running against benchmarks such as GLUE and SQuAD. You must solve a specific problem, solve it ten times better than the incumbents, and be able to run it at scale and in a cost-effective manner. Therefore, if you have a large and expensive deep learning model that can perform ten different tasks at 90 percent accuracy, it’s a great scientific achievement. But when there are already ten lighter neural networks that perform each of those tasks at 99 percent accuracy and a fraction of the price, then your jack-of-all-trades model will not be able to compete in a profit-driven market.


Has DevOps killed the BA/QA/DBA Roles?

As the industry continues towards DevOps and cloud, these fields will thin out. Each of the roles will trend towards greater specialization, especially the DBA, since the operational overhead of maintaining a database is rapidly decreasing. They’ll last longer at big companies, but the tolerance for lower performers will drastically decline. At the same time, however, the demand for data expertise will keep accelerating. Growth in warehousing and data science should ensure data specialization remains lucrative, and DBAs are well poised to make the transition. Of the three, the BA role seems safest. The average software developer simply does not have the time (nor often the capability) to maintain the social network of a strong BA. However, as more companies migrate to DevOps and agile, the feedback barrier between users and developers will continue to shrink, and as it does, BAs who are not technically competent will be pushed out. The QA role is the hardest to predict. As automation improves, demand for QA staff who run manual scripts and “catch bugs” will disappear.


How to Get a Cybersecurity Job in 2021

There are a bunch of certifications, from CompTIA’s Security+ onwards, that will help signal your readiness for cybersecurity jobs. Some, like the A+, are more entry-level and test general IT competencies; others, like the CISSP, require prior job experience in cybersecurity. That creates a bit of a chicken-and-egg situation: how can you get job experience if you need job experience to get the job in the first place? Adjacent job experience can often make the difference here. Many people transition into cybersecurity from IT roles such as network administration, system administration, or the IT helpdesk, which is an entry-level role; you can gain experience there and transition over. There are also programs tailored to help veterans and people with law enforcement backgrounds get into cybersecurity. Lastly, there are many cybersecurity internships on offer to bridge this gap, though with the right backing, training, and experience you can skip ahead to junior-level analyst roles. SOC analyst roles are a good way to break into the cybersecurity industry; security operations centers need analysts to parse through different threats.


Making A Case For Serverless Machine Learning

The first benefit of serverless machine learning is scalability: a service can absorb up to 10,000 simultaneous requests without any additional scaling logic, and it scales without extra lead time, which makes it well suited to unpredictable spikes in load. Secondly, with the pay-as-you-go model of serverless machine learning, you don’t pay for unused server time, which can save an enormous amount of money; a user with 50,000 requests a month pays only for those 50,000 requests. Thirdly, infrastructure management becomes very easy: there is no need to hire a dedicated specialist, since a backend developer can handle it. AWS Lambda, one of the most popular serverless cloud services, offers all of these advantages. It lets users run code without managing servers and removes the need for developers to explicitly configure, deploy, and manage long-lived compute units. Getting started with serverless machine learning does not require extensive programming knowledge: basic knowledge of Python, machine learning, Linux, and the terminal, along with an AWS account, is enough.
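
As a rough illustration, a serverless inference endpoint on AWS Lambda might look like the sketch below. The model artifact name and the input schema are hypothetical, and the sketch assumes a pickled scikit-learn-style model is bundled with the deployment package:

    # Hypothetical AWS Lambda handler serving predictions from a bundled model.
    import json
    import pickle

    # Load once per container, outside the handler, so warm invocations reuse it.
    with open("model.pkl", "rb") as f:   # "model.pkl" is a placeholder artifact
        MODEL = pickle.load(f)

    def handler(event, context):
        # Assume the caller POSTs {"features": [..numbers..]} via API Gateway.
        body = json.loads(event.get("body", "{}"))
        prediction = MODEL.predict([body["features"]])[0]
        return {
            "statusCode": 200,
            "body": json.dumps({"prediction": float(prediction)}),
        }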


How Blockchain Technology Can Benefit the Internet of Things

The distributed aspect of blockchain means that data are replicated across several computers, which makes hacking more challenging: an attacker now has several targets to compromise rather than one. The redundancy in storage brought by blockchain technology adds security and improves data access, since users in IoT ecosystems can submit and retrieve their data from different devices, Carvahlo said. Continuing with this example, say the burglar is captured and claims in court that the recorded video is forged evidence. The immutable nature of blockchain technology means that any change to the stored data can be easily detected, so the burglar’s claim can be checked by looking for attempts to tamper with the data, he said. However, the decentralization aspect of blockchain technology can be a major issue when storing data from IoT devices, according to Carvahlo. “Decentralization means that the computers used to store data [in a distributed fashion] might belong to different entities,” he said. “In other words, if not implemented appropriately, there is a risk that users’ sensitive data can now be by default stored by and available to third parties.”
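
The tamper-detection argument can be illustrated with a toy hash chain in Python. This is a deliberate simplification of what a real blockchain does (no consensus, no distribution) and not Carvahlo’s implementation:

    # Toy hash chain: each block commits to the previous block's hash, so any
    # edit to stored data breaks every hash that follows it.
    import hashlib, json

    def block_hash(record, prev_hash):
        payload = prev_hash + json.dumps(record, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def build_chain(records):
        chain, prev = [], "0" * 64
        for rec in records:
            prev = block_hash(rec, prev)
            chain.append({"record": rec, "hash": prev})
        return chain

    def verify(chain):
        prev = "0" * 64
        for block in chain:
            if block_hash(block["record"], prev) != block["hash"]:
                return False
            prev = block["hash"]
        return True

    chain = build_chain([{"cam": 1, "video": "frame-001"},
                         {"cam": 1, "video": "frame-002"}])
    chain[0]["record"]["video"] = "forged"   # the burglar's tampering attempt
    print(verify(chain))                     # False: the change is detected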


Software Engineering at Google: Practices, Tools, Values, and Culture

The skills required for developing good software are not the same skills that were required (at one point) to mass-produce automobiles, etc. We need engineers to respond creatively and to continually learn, not do one thing over and over. If they don’t have creative freedom, they will not be able to evolve with the industry as it, too, rapidly changes. To foster that creativity, we have to allow people to be human, and to build a team climate of trust, humility, and respect. Trust to do the right thing. Humility to realize you can’t do it alone and can make mistakes. ... Building with a diverse team is, in our belief, critical to making sure that the needs of a more diverse user base are met. We see that historically: first-generation airbags were terribly dangerous for anyone who wasn’t built like the people on the engineering teams designing those safety systems. Crash test dummies were built for the average man, and the results were bad for women and children, for instance. In other words, we’re not just working to build for everyone, we’re working to build with everyone. It takes a lot of institutional support and local energy to really build multicultural capacity in an organization. We need allies, training, and support structures.



Quote for the day:

"Nothing so conclusively proves a man's ability to lead others as what he does from day to day to lead himself." -- Thomas J. Watson

Daily Tech Digest - June 01, 2021

Microsoft launches first Asia Pacific Public Sector Cyber Security Executive Council

With most technology infrastructure owned and operated by private companies, it is also mission-critical that governments form coalitions with leading tech companies to lead effective cyber-defense strategies and safeguard our region against attackers. Dato’ Ts. Dr. Haji Amirudin Abdul Wahab FASc, CEO of CyberSecurity Malaysia, shared: “Cybersecurity is an important national agenda that cannot rest solely on the backs of IT teams. It should be a priority and responsibility of all individuals, as we continue to see cyber-criminal activities rise exponentially with the proliferation of data and digital connectivity. This coalition certainly establishes stronger partnerships with industry leaders and practitioners that allow us to fortify our security postures and combat cybercrime.” On the future of the cybersecurity ecosystem and the role the coalition will play, Ph.D. candidate ChangHee Yun, Principal Researcher of the AI/Future Strategy Center, National Information Society Agency Korea, added: “The collective intelligence amongst the Asia Pacific nations is paramount to jointly share best practices and strategies that will enable us to resolve cybersecurity challenges at a faster pace, and in a more proactive manner. ...”


A look at API prioritisation strategy of ICICI Lombard

Like any other software development, API development and rollout have their own set of challenges, one of the most important being security and encryption. A robust security framework and periodic security audits of applications are a must to ensure not only that the endpoints of applications are tracked but also that a sufficient level of encryption and account-level security is maintained. Detecting vulnerabilities and plugging them is an ongoing affair and needs to be monitored regularly. Data protection is a critical aspect of security that we pay close attention to. According to Nayak, one of the key areas where organisations make mistakes is estimating volumes for integration. Since many APIs are built with the number of users in mind, it becomes extremely important to also estimate user-based rate limits to ensure scalability. “User-based rate limits also help in tracking the number of calls per user, and outliers are identified as a part of the security evaluation.”
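
A minimal sketch of the user-based rate limiting Nayak describes is a per-user token bucket; the limits below are illustrative values, not ICICI Lombard’s:

    # Per-user token-bucket rate limiter: caps calls per user and surfaces
    # over-limit callers for outlier/security review. Limits are illustrative.
    import time
    from collections import defaultdict

    RATE = 10    # tokens replenished per second, per user
    BURST = 100  # bucket capacity (maximum burst size)

    buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

    def allow(user_id):
        b = buckets[user_id]
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at BURST.
        b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
        b["last"] = now
        if b["tokens"] >= 1:
            b["tokens"] -= 1
            return True
        return False  # over the limit: reject and log for outlier analysis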


HITRUST explained: One framework to rule them all

To understand how this works, we need to first understand what we mean when we talk about a security framework. This isn't some whiz-bang software tool or hardware appliance; instead, it's a set of policies and procedures meant to improve your organization's cyber security strategies. There are innumerable frameworks available out there, some put out by for-profit companies, some by industry cybersecurity orgs, and some by government agencies. This last category will become important for our discussion: many government regulations that touch on cybersecurity have at their heart prescribed frameworks that companies need to implement in order to be in compliance. HITRUST's framework, known as the HITRUST CSF, works along these same lines. What makes HITRUST special is that it isn't attempting to impose its own unique security philosophy onto its users; rather, it consolidates multiple existing public domain security frameworks into a single document. For instance, plenty of these frameworks require all passwords within an organization to be eight characters or more; therefore, the HITRUST CSF includes an eight-character password requirement for those organizations to which that control applies.
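
As a rough illustration of that consolidation idea, the toy Python sketch below merges one overlapping control from several source frameworks by adopting the strictest value. The framework names, values, and the strictest-wins rule are invented for the example; they are not the actual HITRUST CSF mappings:

    # Toy illustration of consolidating overlapping controls from several
    # frameworks into one requirement. All values are invented.
    FRAMEWORKS = {
        "framework_a": {"min_password_len": 8},
        "framework_b": {"min_password_len": 8},
        "framework_c": {"min_password_len": 12},
    }

    def consolidated_control(key):
        """Adopt the strictest value found across all source frameworks."""
        return max(fw[key] for fw in FRAMEWORKS.values())

    print(consolidated_control("min_password_len"))  # 12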


Microsoft's Low-Code Strategy Paints a Target on UIPath and the Other RPA Companies

Microsoft has assembled all of the pieces required by an enterprise to deliver low-code solutions. If it executes well on this strategy, it is poised to become unassailable in the low-code world. When Microsoft talks about low code, it has a pretty expansive view: the language it uses when describing low code encompasses everything from an accountant writing a formula in Excel, to a software engineer using a pre-built connector to pull data from an API, to a consulting firm building a bespoke end-to-end claims management solution for a customer. Microsoft realises that the real challenge with scaling low code is not writing low-code applications; it’s deploying and monitoring them. And it is firmly on a trajectory to solving this challenge. ... Microsoft has put together a pretty impressive strategy. I don’t know how much is by design and how much by tactical zigging and zagging but, judging by the dates the company released each of the pieces in this strategy, it looks like sometime in 2019 someone at Microsoft had a lightbulb moment about how all this should fit together, and they’ve been executing against that strategy ever since.


Are MRI Scans Done By AI Systems Reliable?

Convolutional neural networks are trained either to map the measurement directly to an artifact-free image or to map a coarse least-squares reconstruction of the under-sampled measurement to an artifact-free image. The best-performing methods in the fastMRI competition are all trained networks and yield significant improvements over classical methods. Traditional compressed sensing (CS) methods are popular in MRI reconstruction and are used in clinical practice. Untrained networks are also powerful for compressive sensing, and simple convolutional architectures such as the Deep Decoder work well in practice. For the experiments, the researchers picked ten randomly chosen proton-density-weighted knee MRI scans from the fastMRI validation set. For each of those images, a small perturbation was added to the measurement. The results showed that both trained and untrained methods are sensitive to small adversarial perturbations. For the next experiment, checking for dataset shift, the researchers tested on the Stanford dataset, retrieved by collecting all 18 available knee volumes. “Our main finding is that all reconstruction methods perform worse on the new MRI samples, but by a similar amount.
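
The perturbation experiment can be sketched in miniature with NumPy: undersample a synthetic image’s k-space, inject a small perturbation, and compare naive zero-filled reconstructions before and after. This is a toy stand-in for the paper’s setup, not its code or its reconstruction methods:

    # Toy robustness probe: small perturbation in undersampled k-space vs.
    # its effect on a naive zero-filled reconstruction. Not the paper's code.
    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.random((64, 64))             # synthetic stand-in for a knee slice

    mask = rng.random((64, 64)) < 0.25       # keep ~25% of k-space samples
    kspace = np.fft.fft2(image) * mask       # under-sampled measurement

    # Small complex perturbation restricted to the sampled locations.
    delta = 1e-3 * (rng.standard_normal((64, 64)) +
                    1j * rng.standard_normal((64, 64))) * mask

    recon = np.abs(np.fft.ifft2(kspace))           # zero-filled reconstruction
    recon_pert = np.abs(np.fft.ifft2(kspace + delta))

    print("reconstruction change:", np.linalg.norm(recon_pert - recon))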


A human-centric approach to protect against cybersecurity threats

Teaching and reinforcing positive cyber hygiene among employees is one way in which they can help defend against cyberattacks. This means consistent, safe training of employees at the moment they perform a manoeuvre that could compromise important data or open themselves up to a threat, such as attaching a document with sensitive information to an outside document-sharing service or clicking on an e-mail link without reviewing the source. With practice and consistent guidance, it is possible to train employees with new programmes that help curb unwanted behaviours: the employee is notified when one of these incidents is about to occur and learns in real time why they cannot or should not perform that action. This system of alerts can also be a comfort to employees who know they are protected by it, with additional options to anonymise which employee is connected to each incident, ensuring full visibility while maintaining privacy. With time, these actions become habits. Human error will always occur, but with incident-based training, employees and companies can better protect themselves from outside risks.
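
A minimal sketch of such an incident-based nudge might scan an outbound message for sensitive patterns and warn the employee in real time. The patterns and wording below are simplistic examples, not a production data-loss-prevention rule set:

    # Illustrative incident-based nudge: flag likely-sensitive content in an
    # outbound message before it is sent. Patterns are simplistic examples.
    import re

    SENSITIVE_PATTERNS = {
        "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "national ID": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def check_outbound(text):
        hits = [name for name, pat in SENSITIVE_PATTERNS.items()
                if pat.search(text)]
        if hits:
            # In a real system this would trigger a just-in-time training prompt.
            return f"Warning: message appears to contain {', '.join(hits)}."
        return None

    print(check_outbound("Card number 4111 1111 1111 1111 attached."))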


Investing in the Cybersecurity Workforce of Tomorrow

One solution that will help close the skills gap is to seek out and hire underrepresented candidates. However, providing them with the needed educational resources and skill-building opportunities is yet another challenge. Cybersecurity education is not always accessible to these groups, which typically leads them to pursue other career paths. Investing in the preparation of essential talent pools, such as students, is one key component to closing the cybersecurity skills gap. With the crucial need for people with cyber skills, IT recruiters need to consider candidates who don’t fit the traditional mold of a cybersecurity professional. ... Organizations must provide appropriate resources, and candidates must be willing to take advantage of this opportunity. Along with universities that offer cybersecurity curricula, several community organizations recognize the value of diversity in the industry, providing access to content and programs designed to address the talent shortage. ICMCP and WiCyS are two examples of groups that partner with private organizations to create access to different types of training and mentorship programs for women and minorities looking to transition or grow within the field of cybersecurity.


CISO Confidence Is Rising, but Issues Remain

Many CISOs feel they lack boardroom support. Fewer than two-thirds of global CISOs surveyed for the report indicated that they agree with their board's approach to cybersecurity. Fifty-seven percent of them indicated that the expectations placed on their role are excessive. Fifty-nine percent of global CISOs say their reporting line hinders their job effectiveness. This view is most prevalent in the technology sector, where three-quarters of CISOs expressed this sentiment. In the public sector, the issue is less pressing; here, just 38% felt reporting was a burden. The apparent distance between them and their C-suite colleagues makes many CISOs feel they can't do their jobs to the best of their ability. Nearly half of global CISOs don't believe their organization is setting them up to succeed. What's worse, 24% of CISOs strongly agree this is the case. The CISO's ability to trade off agility and security will be even more critical in the future. Now that more organizations know what remote working brings in terms of cost savings and flexibility, it's likely that many will adopt hybrid working models going forward. But CISOs will need to convince their boards that the passable approach they used over the past year won't be enough in the long term.


How data centres can help businesses be more sustainable

The first step for many providers is in a move away from fossil fuels. Data centres are particularly well placed to benefit from renewable energy sources due to their stable power consumption. Indeed, some providers are already achieving 100% zero-carbon energy in their buildings, resulting in lower emissions of carbon and other types of pollution, as well as cost efficiencies. Google is another trailblazer in this area – its large-scale procurement of wind and solar power has made Google the world’s largest corporate buyer of renewable energy. Renewable energy is, and will continue to be, an important part of the strategy to reduce carbon emissions, but different global locations will benefit from different approaches, and it’s important to move beyond a straight ‘we must embrace renewables’ message, to one that recognises the nuances of location. For example, in the Middle East and parts of the US, solar energy is much more prevalent than in the Nordics. Other locations have different options: a good example is at a campus on the southwestern tip of Iceland, which runs almost entirely on geothermal and hydroelectric power.


Security leaders more concerned about legal settlements than regulatory fines

Egress CEO Tony Pepper comments: “The financial cost of a data breach has always driven discussion around GDPR – and initially, it was thought hefty regulatory fines would do the most damage. But the widely unforeseen consequences of class action lawsuits and independent litigation are now dominating the conversation. Organizations can challenge the ICO’s intention to fine and reduce the price tag, and over the last year the ICO has shown leniency towards pandemic-hit businesses such as British Airways, letting them off with greatly reduced fines that many have seen as merely a slap on the wrist. With data subjects highly aware of their rights and lawsuits potentially becoming ‘opt-out’ for those affected in future, security leaders are right to be nervous about the financial impacts of litigation.” Lisa Forte, Partner at Red Goat Cyber Security, comments: “The greatest financial risk post-breach no longer sits with the regulatory fines that could be issued. Lawsuits are now commonplace and could amount to writing a blank cheque if your data is compromised.



Quote for the day:

"It is easy to lead from the front when there are no obstacles before you, the true colors of a leader are exposed when placed under fire." -- Mark W. Boye