Daily Tech Digest - September 17, 2020

Outbound Email Errors Cause 93% Increase in Breaches

Egress CEO Tony Pepper said the problem is only going to get worse with increased remote working and higher email volumes, which create prime conditions for outbound email data breaches of a type that traditional DLP tools simply cannot handle. “Instead, organizations need intelligent technologies, like machine learning, to create a contextual understanding of individual users that spots errors such as wrong recipients, incorrect file attachments or responses to phishing emails, and alerts the user before they make a mistake,” he said. The most common breach types were replying to spear-phishing emails (80%), emails sent to the wrong recipients (80%) and sending the incorrect file attachment (80%). Speaking to Infosecurity, Egress VP of corporate marketing Dan Hoy said businesses reported an increase in outbound emails since lockdown, “and more emails mean more risk.” He called this a numbers game: remote workers become more susceptible and more likely to make mistakes the further removed they are from security and IT teams. According to the research, 76% of breaches were caused by “intentional exfiltration.” Hoy said this covers a combination of employees innocently trying to do their jobs without meaning harm, such as sending files to personal webmail accounts, which still increases risk, “and you cannot ignore the malicious intent.”


‘The demand for cloud computing & cybersecurity professionals is on the rise’

The COVID-19 pandemic has undoubtedly disrupted the normalcy of every company across every sector. At Clumio, our primary focus continues to be the health and well-being of our people. While tackling the situation, we also need to keep pace with our professional duties. We made the transition to remote work immediately and are in constant touch with our employees to ensure they don’t feel isolated and remain focused on their work. We are encouraging employees to follow the best practices of remote work and motivating them to spend time on their emotional, mental and physical wellbeing during this time. We conduct Zoom happy hours frequently to stay connected and have fun. As part of one such session, we recently celebrated a virtual baby shower for one of our colleagues. We had our annual summer picnic and created wonderful memories while maintaining social distance, but staying together. During this time, we have also launched our India Research and Development Center in Bangalore, which will drive front-end innovation and research to build cloud solutions. India has a huge talent pool in technology, and it is only growing. We have also started virtual hiring and onboarding during the pandemic.


AI investment to increase but challenges remain around delivering ROI

ROI on AI is still a work in progress that requires a focus on strategic change. As companies progress in AI use, they often shift their focus from automating internal employee and customer processes to delivering on strategic goals. For example, 31% of AI leaders report increased revenue, 22% greater market share, 22% new products and services, 21% faster time-to-market, 21% global expansion, 19% creation of new business models, and 14% higher shareholder value. In fact, the AI-enabled functions showing the highest returns are all fundamental to rethinking business strategies for a digital-first world: strategic planning, supply chain management, product development, and distribution and logistics. The study found that automakers are at the forefront of AI excellence, as they accelerate AI adoption to deliver on every part of their business strategy, from upgrading production processes and improving safety features to developing self-driving cars. Of the 12 industries benchmarked in the study, automotive employs the largest AI teams. With the government actively supporting AI under its Society 5.0 program, Japanese companies lead the pack in AI adoption. 


The future of .NET Standard

.NET 5 and all future versions will always support .NET Standard 2.1 and earlier. The only reason to retarget from .NET Standard to .NET 5 is to gain access to more runtime features, language features, or APIs. So, you can think of .NET 5 as .NET Standard vNext. What about new code? Should you still start with .NET Standard 2.0 or should you go straight to .NET 5? It depends. App components: If you’re using libraries to break down your application into several components, my recommendation is to use netX.Y where X.Y is the lowest number of .NET that your application (or applications) are targeting. For simplicity, you probably want all projects that make up your application to be on the same version of .NET because it means you can assume the same BCL features everywhere. Reusable libraries: If you’re building reusable libraries that you plan on shipping on NuGet, you’ll want to consider the trade-off between reach and available feature set. .NET Standard 2.0 is the highest version of .NET Standard that is supported by .NET Framework, so it will give you the most reach, while also giving you a fairly large feature set to work with. We’d generally recommend against targeting .NET Standard 1.x as it’s not worth the hassle anymore. 
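The targeting advice above can be sketched in project-file terms. A minimal illustration, assuming an SDK-style project (the version choices shown are examples, not a prescription):

```xml
<!-- Reusable library shipped on NuGet: netstandard2.0 gives the most reach
     (including .NET Framework consumers) with a fairly large feature set. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
<!-- For an app component, you would instead target e.g. net5.0, matching the
     lowest .NET version your applications use, so every project in the app
     can assume the same BCL features. -->
```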


Fintech sector faces "existential crisis" says McKinsey

After growing more than 25% a year since 2014, investment into the sector dropped by 11% globally and 30% in Europe in the first half of 2020, says McKinsey, citing figures from Dealroom. In July 2020, after months of Covid-19-related lockdowns in most European countries, the drop was even steeper: 18% globally and 44% in Europe, versus the previous year. "This constitutes a significant challenge for fintechs, many of which are still not profitable and have a continuous need for capital as they complete their innovation cycle: attracting new customers, refining propositions and ultimately monetizing their scale to turn a profit," states the McKinsey paper. "The Covid-19 crisis has in effect shortened the runway for many fintechs, posing an existential threat to the sector." Analyzing fundraising data for the last three years from Dealroom, the consultancy found that as much as €5.7 billion will be needed to sustain the EU fintech sector through the second half of 2021 — a point at which some sort of economic normalcy might begin to emerge. It is not clear where these funds will come from, however. Fintechs are largely unable to access loan bailout schemes due to their pre-profit status.


Artificial Intuition: A New Generation of AI

Artificial intuition is an easy term to misread because it sounds like artificial emotion or artificial empathy. It differs fundamentally, however. Researchers are working on artificial emotion so that machines can mimic human behavior more precisely, while artificial empathy aims to identify a person's state of mind in real time. Chatbots, virtual assistants and care robots, for instance, can then respond to people more appropriately in context. Artificial intuition is closer to human instinct: it can quickly assess a situation as a whole, including extremely subtle markers of specific activity. This fourth generation of AI enables computers to discover threats and opportunities without being told what to search for, just as human intuition allows us to make decisions without being explicitly told how to do so. It is like a seasoned detective who can walk into a crime scene and know immediately that something is off, or an experienced investor who can spot a coming trend before anyone else.


Attacked by ransomware? Five steps to recovery

Arguably the most challenging step in recovering from a ransomware attack is the initial awareness that something is wrong. It’s also one of the most crucial. The sooner you can detect the ransomware attack, the less data may be affected. This directly impacts how much time it will take to recover your environment. Ransomware is designed to be very hard to detect. By the time you see the ransom note, it may already have inflicted damage across the entire environment. Having a cybersecurity solution that can identify unusual behavior, such as abnormal file sharing, can help quickly isolate a ransomware infection and stop it before it spreads further. Abnormal file behavior detection is one of the most effective means of detecting a ransomware attack and produces the fewest false positives compared with signature-based or network traffic-based detection. One additional method to detect a ransomware attack is to use a “signature-based” approach. The issue with this method is that it requires the ransomware to be known: if the code is available, software can be trained to look for it. This is not recommended as a primary defense, however, because sophisticated attacks use new, previously unknown forms of ransomware.
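As a sketch of the contrast between the two approaches, the toy checks below match a known hash (signature-based) versus flagging a burst of file modifications (behavioral). The hash value and thresholds are purely illustrative, not taken from any real product:

```python
import hashlib

# Hypothetical signature database; real products ship large, frequently
# updated sets and richer behavioral telemetry.
KNOWN_RANSOMWARE_HASHES = {
    "275876e34cf609db118f3d84b799a790d5f01f31",  # placeholder signature
}

def signature_match(payload: bytes) -> bool:
    """Signature-based check: only catches *known* ransomware strains."""
    return hashlib.sha1(payload).hexdigest() in KNOWN_RANSOMWARE_HASHES

def abnormal_file_activity(events_per_minute: int, baseline: int = 20) -> bool:
    """Behavioral check: flag a burst of file modifications far above the
    host's normal baseline -- effective even for unknown strains."""
    return events_per_minute > 10 * baseline

# A brand-new strain evades the signature check...
assert not signature_match(b"previously unseen ransomware binary")
# ...but mass-encrypting thousands of files a minute still trips the
# behavioral check.
assert abnormal_file_activity(5000)
```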


Struggling to Secure Remote IT? 3 Lessons from the Office

To prepare for the arrival of CCPA, business leaders told us they spent an average of $81.9 million on compliance during the last 12 months. Yet despite making investments in hiring (93%), workforce training (89%), and purchasing new software or services to ensure compliance (95%), 40% still felt unprepared for the evolving regulatory landscape. Why? Because the root causes were not addressed. Perhaps their IT operations and security teams worked in silos, creating complexity and narrowing their visibility into their IT estates. Maybe their teams were completely unaware that other departments introduced their own software into the environment. Or more commonly, the organization used legacy tooling that wasn't plugged into the endpoint management or security systems of the IT teams. These are just some of the root causes that keep organizations in the dark and prone to exploits. While the transition to remote work was swift, it has presented businesses with an opportunity to face these issues head-on. As workforces continue to work remotely, CISOs and CIOs now have the chance to evaluate how they effectively manage risk in the long term, which includes running continuous risk assessments and investing in solutions that deliver rapid incident response and improved decision-making.


CTO challenges around the return to the workplace

Every CTO tells us that the digital transformation and change management programmes designed to address relentless regulatory, competitor, innovation and customer challenges must go ahead as planned, regardless of the pandemic. You may be tackling the automation of end-to-end electronic trading workflows or creating mobile framework applications. Whatever the focus, firms stumble over the limitations of legacy systems, which hamper the journey towards electronification: trading desks still depend on quotes, orders and trades processed from a multitude of external trading platforms, and the resulting inconsistency, lag and gaps lead to costly errors that are missed opportunities at best, and regulatory reporting breaches and huge fines at worst. In the quest for efficiency, risk mitigation and a seamless, future-proofed IT architecture, firms must automate to meet their regulatory obligations and deliver client, management and regulatory transparency. And this hasn’t even touched on the ambition to create end-to-end, freely flowing models of perfectly clean, ordered and well-governed data. Every CTO needs to apply extraction and visualisation layers and mine the data for valuable insights that can be fed further upstream.


The Case for Explainable AI (XAI)

Despite the numerous benefits of developing XAI, many formidable challenges persist. A significant hurdle, particularly for those attempting to establish standards and regulations, is the fact that different users will require different levels of explainability in different contexts. Models that are deployed to effectuate decisions that directly impact human life, such as those in hospitals or military environments, will produce different needs and constraints than ones utilized in low-risk situations. There are also nuances within the performance-explainability trade-off. Infrastructure and systems designers are constantly balancing the demands of competing interests. ... There are also a number of risks associated with explainable AI. Systems that produce seemingly credible but actually incorrect results would be difficult for most consumers to detect. Trust in AI systems can enable deception by way of those very AI systems, especially when stakeholders provide features that purport to offer explainability where they actually do not. Engineers also worry that explainability could give rise to broader opportunities for exploitation by malicious actors. Simply put, if it is easier to understand how a model converts input into output, it is likely also easier to craft adversarial inputs that are designed to achieve specific outputs.



Quote for the day:

"Great leaders go forward without stopping, remain firm without tiring and remain enthusiastic while growing" -- Reed Markham

Daily Tech Digest - September 16, 2020

As The Cloud Transforms Enterprises, Cybersecurity Growth Soars

Moving beyond password vaults and implementing cloud-based Identity Access Management (IAM) and Privileged Access Management (PAM) solutions are proving to be adaptive enough to keep up with cloud-based transformation projects being fast-tracked to completion before the end of the year. Leaders in cloud-based IAM and PAM include Centrify, which helps customers consolidate identities, deliver cross-platform, least privilege access and control shared accounts while securing remote access and auditing all privileged sessions. Enterprise spending on cybersecurity continues to grow despite the pandemic, driven by the compelling cost and time-to-market advantages cloud-based applications provide. Gartner predicts end-user spending for the information security and risk management market will grow at a compound annual growth rate of 8.2% between 2019 and 2024, becoming a $207.7B market in four years. CIOs and their teams say the pace and scale of cloud transformation in the latter half of 2020 make cloud-based IAM and PAM a must-have to keep their cloud transformation initiatives on track and secure.


Azure + Spring Boot = Serverless - Q&A with Julien Dubois

With “lift and shift”, one of the problems you will face is that you will need to port all your Spring MVC controllers to become Spring Cloud functions. There are different ways and tricks to achieve this, and you will have some limitations, but that should work without spending too much money. Also, you wouldn’t touch your business code too much, so nothing important should break. Then, an Azure Function works in a very different way from a classical Spring Boot application: you can still benefit from a Hibernate second-level cache or from a database connection pool, but you can easily understand that they will not be as efficient, as their time to live will be much lower. Using a distributed cache will be a major problem here. Also, your function can scale up a lot better than your one-node Tomcat server, so maybe your database will not work as well as before, as it wasn’t built for that sort of load. You could instead use a database like CosmosDB, or a caching solution like Redis, which are two very widely used options on Azure. This is a lot more work, but that’s the only way to get all the benefits from a serverless platform.


The power of feelings at work

Articulating feelings can also bring new understanding to apparent insubordination or poor performance. For example, our firm was once engaged to help an oil refinery stymied by a total lack of response from its employees when it tried to elicit ideas for a cost-reduction effort. To understand what was behind the disengagement, we held focus groups across functions and grades and conducted one-on-one interviews with senior executives. We sought out individuals whom others identified via a questionnaire (e.g., to whom do you go within the organization if you have a problem?) as particularly in touch with employee feelings and asked for their input on what was stopping action. Across the sessions, an articulation of feelings — frustration, fear, and hopelessness — emerged. ... Because feelings are always messengers of needs, the next step is to follow the feeling to the need. Needs are actionable and often multidimensional. Taking action from an understanding of needs sidesteps recrimination and invites compassion. Psychologist Marshall Rosenberg, founder of the Center for Nonviolent Communication, which advocates speaking from needs to navigate conflict, writes, “From the moment people begin talking about what they need, rather than what’s wrong with one another, the possibility of finding ways to meet everybody’s needs is greatly increased.”


Cybersecurity Bounces Back, but Talent Still Absent

Leave it to a global pandemic to disrupt industries many of us have assumed to be stalwart. Companies fortunate enough not to traffic in hard goods are realizing they can survive (and cut significant costs) by moving to work-from-home workforces. This shift, with an estimated 62% of the workforce now working from home, demonstrates the increased need to hire cybersecurity personnel to manage these new business models. At first, this sounds great for the resilience of the cybersecurity sector — but it means the already existing skills shortage for security professionals is about to get a lot worse. The result is that the lines between what have been considered "pure" cybersecurity roles and, well, everything else are becoming blurred. A recent (ISC)² survey shows that many security professionals are being leveraged to support general IT requirements to accommodate different needs for work at home amid the pandemic. That makes sense. Companies need to have the infrastructure in place to support these new remote workers logging in from their home ISPs while also ensuring the security of sensitive data and intellectual property.


Secondary DNS - Deep Dive

Secondary DNS has many use cases across the Internet; however, traditionally, it was used as a synchronized backup for when the primary DNS server was unable to respond to queries. A more modern approach involves focusing on redundancy across many different nameservers, which in many cases broadcast the same anycasted IP address. Secondary DNS involves the unidirectional transfer of DNS zones from the primary to the Secondary DNS server(s). One primary can have any number of Secondary DNS servers that it must communicate with in order to keep track of any zone updates. A zone update is considered a change in the contents of a zone, which ultimately leads to a Start of Authority (SOA) serial number increase. The zone’s SOA serial is one of the key elements of Secondary DNS; it is how primary and secondary servers synchronize zones. Below is an example of what an SOA record might look like during a dig query. ... In addition to serial tracking, it is important to ensure that a standard protocol is used between primary and Secondary DNS server(s), to efficiently transfer the zone. DNS zone transfer protocols do not attempt to solve the confidentiality, authentication and integrity triad (CIA); however, the use of TSIG on top of the basic zone transfer protocols can provide integrity and authentication.
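The serial-driven synchronization described above can be sketched in a few lines. This is an illustrative simplification; real implementations follow RFC 1982 serial-number arithmetic on the 32-bit serial space, which the helper below mirrors:

```python
SERIAL_BITS = 32

def serial_gt(s1: int, s2: int) -> bool:
    """RFC 1982 comparison: is s1 'greater than' s2 on the circular
    32-bit serial space? Handles wrap-around, e.g. a serial rolling
    over from 4294967295 back past 0."""
    half = 2 ** (SERIAL_BITS - 1)
    return (s1 < s2 and s2 - s1 > half) or (s1 > s2 and s1 - s2 < half)

def needs_transfer(primary_serial: int, secondary_serial: int) -> bool:
    # A secondary initiates a zone transfer (AXFR/IXFR) when the
    # primary's SOA serial is ahead of its own copy of the zone.
    return serial_gt(primary_serial, secondary_serial)

# Date-encoded serials (YYYYMMDDnn) are a common convention:
assert needs_transfer(2020091701, 2020091700)      # primary updated -> transfer
assert not needs_transfer(2020091700, 2020091700)  # in sync -> no transfer
```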


Leadership lessons from Accenture’s CIO

The only certain thing about technology is that what we use today is not what we will use tomorrow. In my senior team, I look for a mindset of flexibility, which not everyone has. In my interviews I assess the candidate’s willingness to rotate to the new and untried, while at the same time protecting our core systems. The second competency stems from the fact that we live in a world of publicity and social media, where people are always watching what you are doing. When you live in this world, and represent your company, you need an identifiable personal brand. I’m not talking about being a big, shiny icon and writing a lot of white papers. It’s about having the confidence in who you are and communicating, in a few simple words, your unique value. My responsibility is to help people recognize their personal brand and to use it as the foundation of their confidence to lead. We put out a capability called ALICE (Accenture Legal Intelligent Contract Exploration), which uses applied intelligence and machine learning for natural language processing across all of our contracts.


TOGAF and the history of Enterprise Architecture

TOGAF assumes it will be used by companies with many departments that are hierarchical in structure and decision-making. Thus, the framework maps directly onto these types of organizational structures. For example, a whole chapter is devoted to describing how to establish and operate an Architecture Board, a central planning committee that oversees the design and implementation of the Enterprise Architecture. There are chapters about Risk Management, Architecture Partitioning, and Architecture Contracts. Remember, as mentioned earlier, TOGAF has 52 chapters. It's exhaustive. However, while TOGAF is highly structured, it can accommodate other methodologies, such as Agile and DevOps. TOGAF is best suited to highly structured organizations, but it also recognizes that no two companies are alike. TOGAF supports versatility. TOGAF accepts the iterative model, which is a critical component of Agile. In fact, there is a chapter in TOGAF titled "Applying Iteration to the ADM" that discusses the nature and implementation of iteration in the Architecture Development Method (ADM). Also, DevOps can be accommodated by TOGAF.


Renault’s blockchain project aims at vehicle compliance

Deployed in collaboration with IBM, XCEED is based on ‘Hyperledger Fabric’ blockchain technology. It is designed to track and certify the compliance of components and sub-components. The tool looks to enable greater responsiveness and efficiency at a time of what Renault believes is ‘ever-greater regulatory stringency.’ Here, the carmaker is referring to the European Parliament’s new market surveillance regulations, which came into effect at the start of September. These rules mean enhanced controls for vehicles already on the market, so the production chain must adjust its structure to respond to regulations within shorter timeframes. With XCEED, blockchain is used to create a trusted network for sharing compliance information between manufacturers. Each party maintains control and confidentiality over its data without compromising its integrity, while also increasing security. Testing was conducted at Renault’s Douai plant, with over one million documents archived and a speed of 500 transactions per second.


Blockchain in Mortgages – Adopting the New Kid on the Block

The world is progressing fast in the technology domain, and digital services have reached previously unexplored corners of life. Digital transformation has become the backbone of companies and industries throughout the planet. Traditional banking and lending, which were thriving a couple of years back, are dying a quick death. Since the pandemic, banks and other financial institutions have increased the push on digital channels for acquisition and customer servicing. Agile and scalable lending platforms are cutting acquisition costs and driving growth in loan volumes. Eleven of India’s largest banks, including ICICI Bank Ltd, Axis Bank Ltd, Kotak Mahindra Bank, HDFC Bank, Yes Bank, Standard Chartered Bank, RBL Bank and South Indian Bank, have come together to form a consortium to leverage blockchain technology. Blockchain solutions will help increase operational efficiency, speed up turnaround times, enhance customer delight and improve the transparency and authenticity of transactions. Banks will be able to make better credit decisions thanks to the globally shared and distributed network and increased communication between banks. Blockchain will reduce fraud and data manipulation and deliver better portfolios.


How Microsoft is looking to MetaOS to make Microsoft 365 a 'whole life' experience

Microsoft's highest level MetaOS pitch is that it is focused on people and not tied to specific devices. Microsoft seems to be modeling itself a bit after Tencent's WeChat mobile social/payment app/service here, my sources say. Microsoft wants to create a single mobile platform that provides a consistent set of work and play services, including messaging, voice and video, digital payments, gaming, and customized document and news feeds. The MetaOS consists of a number of layers, or tiers, according to information shared with me by my contacts. At the lowest level, there's a data tier, which is implemented in the Office substrate and/or Microsoft Graph. This data tier is all about network identity and groups. There's also an application model, which includes work Microsoft is doing around Fluid Framework (its fast co-authoring and object embedding technology), Power Apps for rapid development, and even the Visual Studio team for dev tools. Microsoft is creating a set of services and contracts for developers around Fluid Core, Search, personalization/recommendation, security, and management.



Quote for the day:

"As a leader, you set the tone for your entire team. If you have a positive attitude, your team will achieve much more." -- Colin Powell

Daily Tech Digest - September 15, 2020

Edge computing’s epic turf war

Edge computing and the IoT — not to mention COVID-19-related production and supply chain disruptions — are already blurring the lines separating the two cultures. IoT devices bring a new level of monitoring and in some cases control over OT systems. Plus, the edge deployments to which those IoT devices connect promise a whole new arsenal of analytics capabilities to crunch the massive amounts of data being produced by OT equipment. Yet many OT organizations see edge computing as duplicative and even potentially harmful. “Selling that value proposition [to OT] is very challenging,” says Jonathan Lang, IDC research manager for its Worldwide IT/OT Convergence Strategies research practice. “They already have legacy wired connections and industrial networking capabilities and SCADA [supervisory control and data acquisition] systems and control systems that serve their needs just fine.” OT leaders also believe that integrating new systems could threaten throughput and reliability, he says. Production requirements are changing rapidly, and equipment may need to be changed very quickly. “When IT starts meddling with their equipment, that translates into a loss in productivity,” Lang says.


Use cases for AI while remote working

“Working from home presents a host of challenges; lower quality equipment, shared working spaces and limited available bandwidth to name a few. Many of these challenges mean that the quality of our audio and video communication is less than ideal. AI is regularly applied to improve the quality of our video in real time and automatically filter out distracting background noises. “Remote working can increase the feeling of being isolated or disconnected from workmates, lead to disengagement and even stress and ill health. Through sentiment and interaction analysis of everyday tools like email, messenger and other collaboration applications, AI systems can provide effective means of measuring employee wellbeing and engagement, highlighting where employees need help. Once problems are identified, AI systems can suggest support resources and activities that can help to make things better.” Lastly, an emerging AI-related method of maintaining operations during remote working has been the use of digital workers. “Digital workers are becoming essential for businesses, optimising something we’ve never been able to before: the bandwidth of employees,” said Ivan Yamshchikov, AI evangelist at ABBYY.


Eliminating Unconscious Bias in Tech Recruitment

First, it is important to realize that where you hire from is as important as how you hire. If you only ever recruit from the same schools or with the same websites, then you are leaving yourself at the mercy of the diversity of those institutions. If you only ever use one route to apply, whether that’s only using recruitment consultants or use a website that is only really accessible through a computer, then again you are limiting yourself to candidates with access to those pathways. Outside of reconsidering talent pools, one way to reduce unconscious bias is through the removal of identifying characteristics—no photos, no names, dates of birth or school and university grades. This can help to an extent, in that decision-makers have to go off the candidates’ experience and how they’ve presented their previous roles, but it does have its restrictions—with entry-level positions, for example, where relevant experience might be limited. That’s a quick-fix solution. For a more sustainable, thorough and long-term approach, recruiters—particularly those hiring for technology positions—need to look at how they can accurately judge skill sets. This can be challenging in itself—in larger organizations, those involved in early stage selection may not have the requisite background understanding to make the right choices.


Ad Fraud: The Multibillion-Dollar Cybercrime CISOs Might Overlook

The practice became significantly more widespread when the scammers began leveraging networked bots to create fake clicks on sites they own or ads they've paid for, and now also encompasses hidden ads, targeting ad networks which measure views not clicks; click hijacking, when the fraudster redirects a click from one ad to another; and fake apps, which look like and are labeled as legitimate apps. These techniques are often used simultaneously to victimize companies, making the fight against ad fraud even more complex, says Luke Taylor, the chief operating officer of adtech security company TrafficGuard, which coauthored the report with Juniper. So Taylor believes that at the very least, CISOs should use lessons from the cybersecurity world to encourage their employers to become more engaged with the ad fraud challenge. A lot of ad fraud is based on making fake traffic look real, and the way that fraudsters do that is by stealing traffic logs to mimic them and create authentic-looking but fake traffic. CISOs, Taylor says, should be protecting their logs from cybercriminals the way they protect financial data.
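As a toy illustration of the kind of traffic-log analysis involved, the sketch below flags IPs with implausibly many ad clicks. The threshold and log format are assumptions for the example; production ad-fraud systems combine many more signals:

```python
from collections import Counter

def flag_suspicious_ips(click_log, max_clicks_per_ip=50):
    """Flag IPs generating implausibly many ad clicks.
    click_log: iterable of (ip, ad_id) tuples from the traffic log --
    the same logs fraudsters try to steal in order to mimic real traffic."""
    counts = Counter(ip for ip, _ in click_log)
    return {ip for ip, n in counts.items() if n > max_clicks_per_ip}

# A bot hammering one ad stands out against organic single clicks.
log = [("10.0.0.9", "ad42")] * 500 + [(f"192.0.2.{i}", "ad42") for i in range(30)]
assert flag_suspicious_ips(log) == {"10.0.0.9"}
```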


Tech Conference Diversity: Time to Get Real

Some tech conferences provide non-traditional amenities such as sessions geared towards women (72%), a mothers' room (56%), a conference hosted meetup (28%), on-site daycare (17%) or a childcare stipend (11%). According to Classon, childcare stipends tend to be offered to people to encourage attendance, although they should also be offered to speakers who have not been offered a speaker stipend. "Part of the industry's problem is that organizers look at providing amenities, like a designated mothers' room or childcare stipend, as an extra bonus of their event when these should have been looked at as table stakes to level the playing field for more women," said Classon. "The same argument can be made for religious observances and the need for designated spaces for worship at weeklong or multi-day conferences." Tech conferences also tend to suffer from design bias as evidenced by the use of stools and chairs on stage that can make wearing a dress or skirt uncomfortable for the speaker and the audience. "Replacing bar stools with chairs that are lower to the ground makes it more comfortable for everybody, frankly," said Classon. "Organizers should also consider swapping out the common clip-on microphone that is difficult to attach to women's clothing for a headset that can rest behind the speaker's ear."


Portland becomes first city to ban companies from using facial recognition software in public

"This is the first of its kind legislation in the nation, and I believe in the world. This is truly a historic day for the city of Portland." Debate over facial recognition software continues to rage after a summer of high-profile news related to the technology. Amazon, IBM, and other major tech companies decided in June to put a moratorium on all police department use of facial recognition software after years of studies showing almost all of the available tools have high error rates and specifically cannot identify the faces of people with darker skin. Later that same month, the ACLU revealed that it was representing a man from Detroit who was arrested in front of his wife and kids based on a mistake by the Detroit Police Department's facial recognition software. Just a day later, US Senators Ed Markey and Jeff Merkley announced the Facial Recognition and Biometric Technology Moratorium Act, which they said resulted from a growing body of research that "points to systematic inaccuracy and bias issues in biometric technologies which pose disproportionate risks to non-white individuals." As with the other recent developments related to facial recognition, experts both praised and criticized the Portland ban.


Wi-Fi 6 explained: Speed, range, latency, frequency and security

Wi-Fi 6 technology enables the fastest wireless networks to date, with theoretical maximum speeds of around 10 Gbps versus Wi-Fi 5 Wave 2's peak data rates of around 7 Gbps. Experts caution, however, that max wireless speeds are typically unattainable except in perfect, laboratory-like conditions. That's why Wi-Fi 5 -- advertised as "gigabit wireless" -- failed to smash the gigabit barrier in actual practice, according to Zeus Kerravala, founder and principal analyst of ZK Research. "If you were sitting at your desk by yourself and no one else was attached to the network, you might have gotten a gigabit of connectivity from Wave 2," Kerravala said. "But I haven't talked to any company that did." He expects Wi-Fi 6 will be the first standard to consistently exceed 1 Gbps in real-world implementations. But more important than its raw increase in throughput, experts agreed, is Wi-Fi 6 technology's efficiency gains, which result in higher capacity and lower latency overall. "Wi-Fi 6 is not as much about getting better device performance," independent analyst John Fruehe said. "It really does a better job of managing larger numbers of clients at the router level."


Open Source Security's Top Threat and What To Do About It

It was easier for organizations to understand and control their use of open source software 10 or 20 years ago, when a smaller pool of commercial open source vendors licensed their software to customers, understood everything about the code, and handled security patching. Now, however, developers draw from a massive array of smaller projects they find on GitHub or share with each other. That, after all, is the beautiful thing about open source — developers no longer have to struggle with bad tools or reinvent software wheels when they can easily benefit from the community's freely available contributions to tackle just about any development need. When they do so, however, they seldom examine what's under the hood — the source code and its dependencies. Can they really trust the code? Does the party who created it stand ready to pinpoint and disclose any security flaws? Is there even someone to contact? A single application can have 10 runtimes and 100 other packages. How can you be confident all are up to date from a security perspective? This fragmentation is the No. 1 open source security threat for enterprises.


Why problem solving using analytics needs new thinking

There are parallels for this evolution. There was a time when building a website meant learning to write extensive lines of code. This eventually evolved to a partial self-service model via open-source software, and now the prevalence of simple drag-and-drop features allows anyone with an idea to create a personalised website. As with the development of web design, APA platforms now allow users to get to the creative stage – or the ‘thinking stage’ – sooner, leapfrogging the mundane tasks of sourcing, cleaning and organising data. The equivalent of web design’s user-friendly drag-and-drop features are the hundreds of building blocks that jump-start the process of creating useful analytic models. Through a unified method of managing data analytics, automating business processes and elevating employees to spend their time on more strategic solving, APA reshapes the way companies generate data-driven insights and act on them. This enables upskilled employees in all parts of the business to ask hard questions and obtain swift answers without always relying upon the advanced skills of data experts. By replacing a range of cumbersome point solutions with one platform that sits across the entire analytic journey, APA also enables anyone in any organisation to build predictive models and use predictive data analytics to drive quick wins.


From Monolith to Event-Driven: Finding Seams in Your Future Architecture

The name of the CQRS pattern strongly suggests that it is about the segregation of commands and queries, but recognizing that there is a difference between 1) asking for the state of a system and 2) asking the system to change its state is more fundamental than the separation itself. In fact, you’ll find many variants of CQRS implementations ranging from logical to physical. Combining EDA with the CQRS pattern is a natural increment of the system’s design because commands are the generators of events. With CQRS and commands, the migration of data during the transitional state of an architecture provides a seam that, once the migration is over, can be removed. This seam will be covered in more detail in the Data Migration Seams section. Event sourcing a system means treating events as the source of truth. In principle, until an event is made durable within the system, it cannot be processed any further. Just as an author’s story is not a story at all until it’s written, an event should not be projected, replayed, published or otherwise processed until it is durable, such as being persisted to a data store. Designs where the event is secondary cannot rightfully claim to be event sourced; they are merely event-logging systems.
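The durability-first rule described above can be sketched in a few lines. This is a minimal, illustrative event store (the class and function names are my own, not from any particular framework): the event is fsync'd to an append-only log *before* any projection is allowed to see it.

```python
import json
import os
import tempfile

class EventStore:
    """Minimal event-sourcing sketch: events become durable before
    they are projected, published, or otherwise processed."""

    def __init__(self, path):
        self.path = path
        self.projections = []  # callables invoked after durability

    def append(self, event):
        # 1. Persist first: the durable event log is the source of truth.
        with open(self.path, "a") as f:
            f.write(json.dumps(event) + "\n")
            f.flush()
            os.fsync(f.fileno())
        # 2. Only now may downstream views consume the event.
        for project in self.projections:
            project(event)

# A read-side projection rebuilt purely from events.
balance = {"amount": 0}

def project_balance(event):
    balance["amount"] += event["delta"]

store = EventStore(os.path.join(tempfile.mkdtemp(), "events.log"))
store.projections.append(project_balance)
store.append({"type": "Deposited", "delta": 100})
store.append({"type": "Withdrawn", "delta": -30})
print(balance["amount"])  # 70
```

Reversing the two steps in `append` would yield exactly the "event-logging system" the excerpt warns about: views could observe state changes that were never made durable.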



Quote for the day:

"Leadership does not always wear the harness of compromise." -- Woodrow Wilson

Daily Tech Digest - September 14, 2020

The 5 Biggest Technology Trends In 2021 Everyone Must Get Ready For Now

In recent years we have seen the emergence of robots in the care and assisted living sectors, and these will become increasingly important, particularly when it comes to interacting with members of society who are most vulnerable to infection, such as the elderly. Rather than entirely replacing the human interaction with caregivers that is so important to many, we can expect robotic devices to be used to provide new channels of communication, such as access to 24/7 in-home help, as well as to simply provide companionship at times when it may not be safe to be sending nursing staff into homes. Additionally, companies finding themselves with premises that, while empty, still require maintenance and upkeep, will turn to robotics providers for services such as cleaning and security. This activity has already led to soaring stock prices for enterprises involved in supplying robots. Drones will be used to deliver vital medicine and, equipped with computer vision algorithms, used to monitor footfall in public areas in order to identify places where there is an increased risk of viral transmission.


New BlindSide attack uses speculative execution to bypass ASLR

Speculative execution is a performance-boosting feature of modern processors. During speculative execution, a CPU runs operations in advance and in parallel with the main computational thread. When the main CPU thread reaches certain points, speculative execution allows it to pick an already-computed value and move on to the next task, a process that results in faster computational operations. All the values computed during speculative execution are discarded, with no impact on the operating system. Academics say that the very same process that can greatly speed up CPUs can also "[amplify] the severity of common software vulnerabilities such as memory corruption errors by introducing speculative probing." Effectively, BlindSide takes a vulnerability in a software app and exploits it over and over in the speculative execution domain, repeatedly probing the memory until the attacker bypasses ASLR. Because the attack takes place inside the realm of speculative execution, failed probes and crashes don't impact the CPU or its stability; they are simply suppressed and then discarded.


Five things businesses need to think about when implementing AI

The phrase “first time right” rarely applies to implementing AI, and that is especially true in making predictions and forecasts. Achieving an acceptable level of accuracy could take a number of iterations and continuous course corrections. Failure, then, must happen fast in order to learn what to correct. Since the stakes are high and there is always a risk of failure, it is also important to start with a smaller problem or a subsection of a large problem. This helps to reduce the risk associated with the cost of failure. There is no shame in dropping an idea and rethinking the approach. In fact, that willingness to rethink is vital. If the viability of a solution is in doubt, persisting with it – and, by so doing, wasting time and money – is never the right way to go. It is always advisable to change course or, in some cases, drop the idea altogether and pick a new one. Once a smaller problem is resolved and the business can see its value – and associated ROI – the solution can be scaled up to solve a bigger problem. ... IT and AI projects are inherently different. IT projects proceed with a clear idea and a set target for the desired output from day one. AI, by contrast, is mostly used in the quest to understand the unknown. It is therefore impossible to know what the output will be ahead of time.


Edge computing: The next generation of innovation

Edge computing may be relatively new on the scene, but it’s already having a transformational impact. In “4 essential edge-computing use cases,” Network World’s Ann Bednarz unpacks four examples that highlight the immediate, practical benefits of edge computing, beginning with an activity about as old-school as it gets: freight train inspection. Automation via digital cameras and onsite image processing not only vastly reduces inspection time and cost, but also helps improve safety by enabling problems to be identified faster. Bednarz goes on to pinpoint edge computing benefits in the hotel, retail, and mining industries. CIO contributing editor Stacy Collett trains her sights on the gulf between IT and those in OT (operational technology) who concern themselves with core, industry-specific systems – and how best to bridge that gap. Her article “Edge computing’s epic turf war” illustrates that improving communication between IT and OT, and in some cases forming hybrid IT/OT groups, can eliminate redundancies and spark creative new initiatives. One frequent objection on the OT side of the house is that IoT and edge computing expose industrial systems to unprecedented risk of malicious attack. 


5 SMART goals for a QA analyst

Software testers need a basic knowledge of these programming language staples for continued career growth. Successful execution of manual tests and automated scripts is helpful, but testing activities only go so far. It's even more important to know the conditions under which data enters one of the programming structures, and what must happen for that data to exit it. Let's start with if-then-else logic. In this structure, the if clause tests whether a condition exists. If it does, the then function executes; otherwise, the else function executes (or nothing happens). The if-then-else structure works well when a condition is true or false. A case structure might be appropriate when a condition falls into one slot in a range of possibilities. A case structure expands on if-then-else by providing multiple functions to execute if certain conditions exist. For example, an if-then-else structure might check if a number falls in the range 2-10 and, if it does, the number is multiplied by five. If the number is not in that range, it falls into the else condition and is not multiplied at all. A case structure specifies what to do when a number falls into one of many ranges. In this example, when a number is between 2-10, it falls into Case A and is multiplied by five.
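The two structures described above can be shown side by side in a short sketch. The article only specifies Case A (2-10, multiply by five); the second range and its multiplier below are hypothetical additions to illustrate the multi-slot case structure.

```python
def apply_if_then_else(n):
    # if-then-else: a single true/false condition
    if 2 <= n <= 10:
        return n * 5   # "then" branch: in range, multiply by five
    else:
        return n       # "else" branch: out of range, not multiplied

def apply_case(n):
    # case structure: one slot in a range of possibilities
    if 2 <= n <= 10:      # Case A (from the article)
        return n * 5
    elif 11 <= n <= 20:   # Case B (hypothetical extra range)
        return n * 10
    else:                 # default slot
        return n

print(apply_if_then_else(4))   # 20
print(apply_case(15))          # 150
print(apply_case(30))          # 30
```

The tester's job, per the excerpt, is to know how data enters and exits each structure: boundary values such as 1, 2, 10 and 11 are exactly where these branches flip.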


Q&A on the Book The Art of Leadership

Given leadership is a career restart, there are daily mistakes. The one I see the most with engineering leaders (but I suspect it applies to all leaders) is the tendency to regress when the stakes are highest. It’s when a new manager thinks he or she is helping during crunch time by finishing the feature, fixing bugs, or otherwise regressing to their prior role. Let’s catalog the reasons they aren’t helping: they’ve put the team in a situation where they appear unable to complete the necessary work (bad planning); they’re doing the work their team should do, sending the unintentional signal that they don’t believe the team can do the work (bad signal); and they’re not giving the team the chance to rise to the occasion, to figure out a creative means to complete the work. This might be impossible because of bad planning, but assume it’s not. What does the team think when the leader keeps saving the day by fixing bugs? It’s a safety net, sure, but it’s a net that isn’t allowing others to grow. Leaders often rationalize this behavior as “I want to remain technical.” I want engineering leaders to be deeply technical, too. 


What A Remote Workforce Means For Innovation

Though managers might periodically have doubts about remote workers’ productivity, it’s vital for companies to create a culture of trusting their employees to complete their work and continue innovating even when they’re off-site, Strawmyer added. As personnel continue to work from home, companies might need to reevaluate how they assess productivity. After all, spending a lot of time in the office isn’t an effective productivity indicator. Employees in the office environment are just as, if not more, capable of wasting time, Strawmyer said. In the beginning of the pandemic, workers were focused on getting used to their remote workflow. But now that they are adjusting to the current working conditions, Bang anticipates that more innovation will follow. Depending on their unique situation, removing a lengthy commute or wrapping up other projects has freed some employees up to focus on long-term ideas, he added. However, while working from home does free up time, there’s a big difference between transitioning to remote work under normal circumstances and shifting to remote work during a pandemic, Strawmyer acknowledged. 


5 Best Techniques for Automated Testing

To obtain better results through automated testing, testing must start earlier and run as frequently as required. The sooner the QA team gets engaged in the project life cycle, the better you test and the more glitches and anomalies you find. Automated unit tests can be executed from day one, and the tester can then progressively build out the automated test suite. Moreover, bugs detected in an early phase cost less to fix than those identified later in deployment or production. Hence, with the shift-left movement, proficient testers and software developers are now empowered to create and run tests. The major automated testing tools allow users to carry out functional user interface tests for desktop and web apps easily using their preferred IDEs.  ... Automation can’t replace manual testers; it means executing some tests more frequently. The expert tester should start small, attempting the smoke tests first and then covering build acceptance testing. After that they can move on to frequently performed tests, then on to time-consuming tests. Besides this, the QA tester has to ensure that every test they automate saves time for a manual tester to concentrate on other vital things.


Is low-code/ no-code the dark horse in enterprise technology for a Covid-19 afflicted world?

The adoption of low-code by enterprises is still in its infancy on account of a host of issues which range from technology complexity, vendor lock-in, or maybe even a basic lack of understanding of how low-code functions. “Scaling automation is a typical problem and many companies could have invested in licenses not knowing how to leverage its potential. Secondly, enterprise automation can seem very complex if you’re not using the right tools and technologies,” Persistent’s Dixit said. “To ensure the success of any project, execution requires the right mix of domain, tech and skilled consultants,” he added. LeapLearner’s Ranjan sees a lack of integration using APIs as a hindrance. “This limits the ability to create applications and systems which can solve complex problems and are AI enabled. Hopefully as the low-code/ no-code eco system grows, this will change,” he said. But some low-code platform startups, such as Bengaluru based Mate Labs, are working towards creating a low-code environment that would be able to address these challenges.


How to approach Agile team organization

Forming a productive Agile team requires a significant commitment of individual energy, time and concentration from each member. When Agile teams first come together, they go through what psychologist Bruce Tuckman termed the stages of forming, storming, norming and performing. At first, everyone must figure out the team dynamics. After a team forms, it establishes hierarchies of decision-making and leadership. Everyone finds their place and function within the team. This process takes time, as team members find their niche within the group's overall norms. In the norming phase, the team functions smoothly, with less arguing or trying to rise above the others. The group becomes a team rather than a collection of co-workers. When a team stabilizes, it's a sign that members have learned how to optimize their skills within working relationships. The most successful Agile teams develop bonds that allow them to be productive and interact on a personal, genuine level. When project managers switch out team members, team development starts all over again. When a stable team is disrupted, it must go through the forming, storming, norming, performing process all over again.



Quote for the day:

"Leadership is an opportunity to serve. It is not a trumpet call to self-importance." -- J. Donald Walters

Daily Tech Digest - September 13, 2020

The Importance Of Predictive AI In Cybersecurity

The use of AI systems in the realm of cybersecurity can have three kinds of impact, as the work repeatedly states: AI can expand cyber threats (quantity), change the typical character of these threats (quality), and introduce new and unknown threats (quantity and quality). AI could expand the set of actors capable of performing malicious cyber activities, the speed at which these actors can carry them out, and the set of plausible targets. Crucially, AI-fueled cyber attacks are also likely to appear in more powerful, finely targeted and advanced operations because of the efficiency, scalability and adaptability of these solutions. Potential targets become more easily identifiable and controllable. In a mix of defensive techniques and cyber threat detection, AI will move towards predictive techniques, such as Intrusion Detection Systems (IDS) aimed at recognizing illegal activity within a computer or network, or spam and phishing defenses backed by two-factor authentication systems. The defensive strategic use of AI will also soon focus on automated vulnerability testing, also known as fuzzing.


Top 9 ways RPA and analytics work together

Increasingly, organizations are using robotic process automation in analytics tasks from assembling data spread across the company to analyzing how business processes work and how they can improve. "RPA is helping streamline the processes that create valuable insights, changing what areas analytics are measuring and helping to find new domains of time-consuming tasks to focus on," said Michael Shepherd, engineer at Dell Technologies Services. When it comes to RPA and analytics, the automation tool should be a complement to, rather than a replacement for, an integrated data platform across the company, said Jonathan Hassell, content director for data and AI at O'Reilly Media. When companies lack an integrated data platform, it makes analytics more difficult overall. "RPA can help in some ways, but the potential for RPA to unlock insights in data and further output and processes requires a good data platform with great health and hygiene," he said. Hassell recommended organizations look at three key ways RPA can change analytics. First, it helps create better data from the outset. Second, the organization can deploy it in the context of machine learning to sift through large quantities of data and identify useful information for humans to look at.


Rethinking AI Ethics - Asimov has a lot to answer for

AI per se is neither ethical nor unethical; what matters is how it is applied by practitioners. But if an AI model is introduced into the market that routinely disrupts ethical concepts, what is it? A neutrally ethical artifact produced by unethical people? Think about the judicial system COMPAS that routinely made bail and sentencing recommendations two- to three-times more severe for African-Americans. That is clearly an unethical AI application. Instead of teaching people about the ethics of Plato, Aristotle and Kant and trying to draw a line from that to building AI applications, a better approach is to start with identifying developers who are simply good people. People who show forbearance, who have the backbone to resist their organization's directive to deliver wrong things. People who have prudence can look beyond the results of a model and project how it will affect the world. Only good people should develop AI. A quick search turns up over a hundred AI ethics proclamations from governments, non-profit special interest groups, government committees, etc., and it's consuming too much energy and bandwidth. In fact, the cynical view is that all of this is just a way to keep authorities from issuing rules for AI.


Designing for Privacy — An Emerging Software Pattern

The first thing to do is to separate, and consider differently, the user data that your system needs to “know” from the user data that the system can collect and treat without actually having access to it. We call these two categories Known and Unknown User Data. There is no magic recipe for this separation, as it depends on your system’s functionality. There are systems where all user data can be considered Unknown — where the system has no knowledge of the user to whom it delivers its functionality. Yet most systems need to identify the user in order to charge them for a service that they deliver. A ride sharing platform might want to consider the identity of riders and drivers “known” but the destination of the ride and any messages exchanged between drivers and riders “unknown”. A hotel booking platform might perform the split in such a way as to connect users with hotels and collect a fee from hotels, but ignore the dates of the booking that reveal the user’s whereabouts. Once you segment which user data to treat as “known” and which as “unknown”, you can adopt a new flavor of client-server architecture — one in which you treat the “known” data as you normally would, but where the “unknown” user data is kept at the user’s endpoint.
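The ride-sharing split described above can be made concrete with a small sketch. The field names are hypothetical, and the string `"<client-side encrypted>"` stands in for real client-side encryption; the point is only that "unknown" fields never reach the server in readable form.

```python
# Hypothetical ride-sharing record split into Known vs Unknown user data.
KNOWN_FIELDS = {"user_id", "payment_token"}      # server may read these
UNKNOWN_FIELDS = {"destination", "messages"}     # opaque to the server

def split_record(record):
    """Partition a client-side record before anything is sent upstream."""
    known = {k: v for k, v in record.items() if k in KNOWN_FIELDS}
    # Unknown data stays at (or is encrypted on) the user's endpoint;
    # the server only ever stores an opaque placeholder/ciphertext.
    unknown = {k: "<client-side encrypted>"
               for k in record if k in UNKNOWN_FIELDS}
    return known, unknown

ride = {"user_id": "u42", "payment_token": "tok_1",
        "destination": "221B Baker St", "messages": ["On my way"]}
known, unknown = split_record(ride)
print(known)    # server-visible: identity and billing only
print(unknown)  # destination and messages remain unreadable
```

In a real system the placeholder would be ciphertext under a key that never leaves the client, so the server can still route, bill and store records without ever learning the ride's destination.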


Blockchain may break EU privacy law—and it could get messy

This technology’s transparency and immutability, meaning it cannot be edited later, is one of its biggest selling points. It’s also why massive enterprises have been drawn to using it. But if someone wanted to invoke their right to be forgotten—and order entries about themselves to be erased from the blockchain—networks may be duty bound to comply under court order. Here’s the kicker: in some cases, it may be near impossible to obey these orders because of the sheer levels of computing power required to edit a blockchain. And in others, the decentralized nature of some networks may mean it’s impossible to pin down someone who can be held responsible for fulfilling the court order. As part of her research, Dr Wahlstrom looked at a variation of blockchain technology that is known as Holochain. She concluded that it could be more compatible with the “right to be forgotten” legislation because of how its distributed database breaks the blockchain up—meaning it is easier for a smaller node to prevent contested data from being reshared. “This allows individuals to verify data without disclosing all its details or permanently storing it in the cloud,” she added.


RBI seeks exemption from data protection law

The banking regulator has also gone a step further and suggested that instead of the Central government, “sectoral regulators be given the power to classify personal data as critical.” Any critical data, according to the proposed act, can be processed only in India. Objecting to the classification of financial data as sensitive personal data, RBI’s note maintained that this would lead to higher compliance and explicit consent, which “would translate to increase in costs for providing services to customers. Financial inclusion efforts rely on lower service charges for offering basic banking services. The increase in costs would compel banks to increase the charges associated with offering banking services.” RBI’s note also pointed out that countries such as the UK, France, Germany and Italy do not make such a classification. Privacy experts said the RBI cannot legally claim an exemption from the obligations that stem from the Supreme Court's 2017 Puttaswamy judgment, which upheld privacy as a fundamental right for Indian citizens. “What the bill does is flesh out that right in terms of the actual actions that need to be done. 


Digital transformation takes more than the wave of a wand

Championing digital transformation requires more than a magic wand or “plug and play” approach. Even the best technology is virtually worthless if everyone isn’t able, available and on board to use it. Without the proper talent, employee training and company-wide desire to evolve in place, people will inevitably revert to their old ways, using antiquated and siloed tools like Excel. Plus, the systems you already have in place have to keep running smoothly as you roll out digitization plans. Digital transformation does not happen with the flip of a switch. It requires ongoing strategic efforts to create a balance among new technologies, strategic solutions and traditional systems. This is why the aforementioned culture shift is essential before starting your transformation journey. As Raconteur author Ben Rossi says in this “Digital innovation and the supply chain” report, “A top-down mandate from board level to drive supply chain transformation is critical to getting the rest of the company to collaborate and change their mindset. Without that, heads of supply chain will run into resistance to change, in turn reducing the chance of achieving the broader transformation goals.” 


Genetic Algorithms: a developer perspective

There are various interesting theories regarding convergence in evolutionary algorithms, but these are of no concern to us here. Our interest is in understanding how these algorithms may be used to solve Artificial Intelligence problems, rather than in understanding why they actually work. One important class of evolutionary algorithms used in practical applications is genetic algorithms: these stress the importance of the data representation used to encode possible solutions to our optimization problem. The class name is inspired by an analogy with genetic code – the material that encodes our ‘phenotype’ or physical appearance. The use of the adjective ‘genetic’ reflects the fact that evolving solutions are represented by data structures, usually strings, reminiscent of biological genetic code. ... The goal of a genetic algorithm is to discover a phenotype that maximises fitness, by allowing a certain population to evolve across several generations. The next question is: how does the evolution of individuals happen? Genetic algorithms apply a set of ‘genetic operations’ to chromosomes of each generation to allow them to reproduce and, in the process, introduce casual mutations, much as occurs in most living beings.
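The genetic operations named above — selection, reproduction via crossover, and casual mutation on string-like chromosomes — can be illustrated with a minimal sketch. The fitness function here is the classic toy "count the 1 bits" problem; all names and parameter values are illustrative choices, not from the article.

```python
import random

random.seed(0)  # reproducible sketch

def fitness(chromosome):
    # Toy fitness: number of 1 bits (the "OneMax" problem).
    return sum(chromosome)

def crossover(a, b):
    # Single-point crossover: splice two parent chromosomes.
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:]

def mutate(chromosome, rate=0.02):
    # Casual mutation: occasionally flip a bit.
    return [bit ^ 1 if random.random() < rate else bit
            for bit in chromosome]

def evolve(pop_size=20, length=16, generations=50):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives and reproduces.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # fitness climbs towards the maximum of 16
```

Each piece maps onto the excerpt's vocabulary: the bit list is the genetic code, `fitness` scores the phenotype, and the loop lets the population evolve across generations.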


The Story of Data — Privacy By Design

Every byte of data has a story to tell. The question is whether the story is being narrated accurately and securely. Usually, we focus sharply on the trends around data with a goal of revenue acceleration but commonly forget about the vulnerabilities caused by bad data management. Data possesses immense power, but immense power comes with increased responsibility. In today’s world collecting data, analyzing it and building prediction models is simply not enough. I keep reminding my students that we are in a generation where the requirements for data security have perhaps surpassed the need for data correctness. Hence the need for Privacy By Design is greater than ever. ... Until recently businesses have focused on looking at data over long stretches of time, made possible by Big Data. With the advent of the Internet of Things (IoT), analyzing real-time data has gained immense importance. It is very common these days to have devices in our homes that collect personal data and transmit it to external locations for either monitoring or analytical purposes. In many cases the poor consumer is finding it difficult to balance the benefits they get from surrendering their personal data against the risks involved in providing it.


Enterprise Data Literacy: Understanding Data Management

Sandwell believes that the Data Literacy problem stems from specialized information needs and a lack of shared context. He remarked: “Data Literacy affects all organizational levels. Everyone uses data for different reasons, including senior managers and the Chief Data Officer (CDO). The CDO tends to come from the business side and takes that perspective. However, he or she may have a steep learning curve about making technical infrastructure ready to serve and deliver.” On the technical side, workers have a good data inventory; however, they have less of an understanding of what the data contents mean to the business. Meanwhile the more data-literate data scientists and business analysts put business and technical information together faster, with more direct data querying and manipulation. So, across the enterprise, everyone has a different Data Literacy perspective and talks at cross purposes to one another. Add to the situation various data maturity levels across departments and enterprises. Some ask about “the data on-hand, where to access it, and how it gets used and by whom.” Others have figured out these basics and have different questions on how to do Metadata Management and create a data catalog of all the data sets. Since everyone has different data requirements at different times, getting to a uniform Enterprise Data Literacy remains elusive.



Quote for the day:

"Leading people is like cooking. Don_t stir too much; It annoys the ingredients_and spoils the food." -- Rick Julian

Daily Tech Digest - September 12, 2020

Women in Fintech: How Open Banking Can Help Address Data Bias

A disturbing recent example is the story of Jamie Heinemeier Hansson, who was granted a credit limit on her Apple Card 20 times lower than her husband David’s. This was despite her having a better credit score, as well as the couple filing a joint tax return and having an equal share in their property. The Apple Card incident highlighted that computers are not impartial. Artificial intelligence may well be able to digest vast amounts of information and identify patterns far beyond the capability of humans, but the historical data from which such systems “learn” in order to draw conclusions can be biased, even if unintentionally. So a system can make a discriminatory decision about a woman’s credit rating due to inherent bias in its training – for example, because women were historically less likely to have been granted credit, the algorithm continues that pattern – despite never having asked her gender. However, many believe that while technology can perpetuate these biases, it could also be used to address them, particularly in the open banking era. “I genuinely believe technology can level the playing field fundamentally,” says Sam Seaton, CEO of Moneyhub.


Simplify agile, devops, and ITSM with Jira automations

Jira automations work like other IFTTT algorithms, except they have access to all the underlying data and workflows within Jira Software. A Jira automation trigger can be one of several types, including Jira issue types, sprints, and versions. You can design automations for when team members add or modify Jira issues, when scrum masters start or complete sprints, or when team leads create, update, or release versions. These triggers are highly useful for scrum masters, product owners, and technical leads who want to simplify the work needed to keep Jira updated with high-quality data. Jira automation also supports triggers tied to devops events such as pull requests, builds, branches, commits, and deployments. These events connect with Bitbucket, GitLab, and GitHub and update Jira issue or version status based on developer activities performed in version control. More advanced triggers can run on a defined schedule or respond to webhooks. Teams using these two triggers can get very creative with integrating Jira workflows with other tools or automating administrative tasks on a schedule. Once you configure the trigger, you have the option to add more filtering conditions or to branch the flow and support different sets of actions.
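The trigger, condition, and action flow described above can be sketched in a few lines of Python. This is a conceptual model only, not Jira's actual API; the class, event shapes, and field names are all illustrative.

```python
# Conceptual sketch (not Jira's real API) of the trigger -> condition ->
# action pattern behind Jira automation rules. All names are illustrative.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List

Event = Dict[str, Any]  # e.g. {"type": "issue_updated", "issue": {...}}

@dataclass
class AutomationRule:
    trigger_type: str  # e.g. "issue_updated", "sprint_started"
    conditions: List[Callable[[Event], bool]] = field(default_factory=list)
    actions: List[Callable[[Event], None]] = field(default_factory=list)

    def handle(self, event: Event) -> bool:
        """Run the actions only if the event matches the trigger type
        and passes every filtering condition."""
        if event.get("type") != self.trigger_type:
            return False
        if not all(cond(event) for cond in self.conditions):
            return False
        for action in self.actions:
            action(event)
        return True

# Example rule: when an issue transitions to "Done", record a notification.
notifications = []
rule = AutomationRule(
    trigger_type="issue_updated",
    conditions=[lambda e: e["issue"]["status"] == "Done"],
    actions=[lambda e: notifications.append(e["issue"]["key"] + " is Done")],
)
rule.handle({"type": "issue_updated",
             "issue": {"key": "PROJ-42", "status": "Done"}})
```

In the real product the rule builder assembles these pieces graphically, but the underlying shape is the same: one trigger, optional filter conditions, and a list of actions that fire when everything matches.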


How trusted data is driving resilience and transformation beyond Covid-19

Over the next three to five years, most business workflows will be disrupted by the application of data and artificial intelligence (AI). Efficiency will be prioritised because it underpins business survival. If we take power and utilities as an example, we can expect disruption of the billing workflow, call centres, customer onboarding, customer service, and distribution. Document intelligence will also be used to glean insights from large volumes of information. Ultimately, data and AI will reinvent the entire end-to-end value chains of industries. Companies that recognise the strategic value of data will be the leaders in digital transformation, giving them a competitive position in the market. ... The pandemic has highlighted the value of data since having and sharing information on individuals will be key to defeating the virus. So, in the evolving normal, we can expect more data-sharing platforms – platforms that allow the public sector to share information with the private sector and platforms that allow different companies within the private sector to share information with each other. Boundaries between sectors will blur over time and regulation will adapt to accommodate data sharing.


Bluetooth Bug Opens Devices to Man-in-the-Middle Attacks

The Bluetooth SIG is recommending that potentially vulnerable Bluetooth implementations introduce the restrictions on CTKD that have been mandated in Bluetooth Core Specification versions 5.1 and later. These restrictions prevent the overwrite of an authenticated key or a key of a given length with an unauthenticated key or a key of reduced length. “The Bluetooth SIG is also broadly communicating details on this vulnerability and its remedies to our member companies and is encouraging them to rapidly integrate any necessary patches,” according to Bluetooth. “As always, Bluetooth users should ensure they have installed the latest recommended updates from device and operating system manufacturers.” Several Bluetooth-based attacks have cropped up over the past year. In May, academic researchers uncovered security vulnerabilities in Bluetooth Classic that could have allowed attackers to spoof paired devices and capture sensitive data. In February, meanwhile, a critical vulnerability in the Bluetooth implementation on Android devices was discovered that could allow attackers to launch remote code-execution (RCE) attacks – without any user interaction.


Australia’s very small step to make the Internet of Things safer

Security flaws in IoT devices are common. Hackers can exploit those vulnerabilities to take control of devices, steal or change data, and spy on us. In recognition of these risks, the Australian government has introduced a new code of practice to encourage manufacturers to make IoT devices more secure. The code provides guidance on secure passwords, the need for security patches, the protection and deletion of consumers’ personal data and the reporting of vulnerabilities, among other things. The problem is the code is voluntary. Experiences elsewhere, such as the United Kingdom, suggest a voluntary code will be insufficient to deliver the protections consumers need. ... A better option would have been a “co-regulatory” approach. Co-regulation mixes aspects of industry self-regulation with both government regulation and strong community input. It includes laws that create incentives for compliance (and disincentives against non-compliance) and regulatory oversight by an independent (and well-resourced) watchdog. The Australian government has, at least, described its new code of practice as “a first step” to improving the security of IoT devices.


Four ways network traffic analysis benefits security teams

The SecOps team will often need the network data and behavior insights for security analytics or compliance audits. This will usually require network metadata and packet data from physical, virtual and cloud-native elements of the network deployed across the data center, branch offices and multi-cloud environments. The easier it is to access, index and make sense out of this data (preferably in a “single pane of glass” solution), the more value it will provide. Obtaining this insight is entirely feasible but will require a mix of physical and virtual network probes and packet brokers to gather and consolidate data from the various corners of the network to process and deliver it to the security tool stack. NDR solutions can also offer the SecOps team the ability to capture and retain network data associated with indicators of compromise (IOCs) for fast forensics search and analysis in case of an incident. This ability to capture, save, sort and correlate metadata and packets allows SecOps to investigate breaches and incidents after the fact and determine what went wrong, and how the attack can be better recognized and prevented in the future.


A Beginner’s Introduction To DevOps Principles

To put it simply, DevOps is all about integrating these two teams together (hence the portmanteau of a name). It isn’t going to make your developers into sysadmins, or vice versa, but it should help them work together. Each aspect and phase is complemented with tools that make this whole process easier. DevOps is more than just tools and automation, and implementing a set of “DevOps tools” won’t automatically make your team work twice as fast, but these tools are a major part of the process, and it’d be hard to be as efficient without some of them. ... Rather than testing and building only once when everything is finished, in a DevOps environment each developer will ideally submit changes to source control multiple times a day, whenever issues are complete or a minor milestone is reached. This allows the build and testing phases to start early, and ensures no developer drifts too far from the HEAD of the master branch. This stage is mostly about proper source control management, so having an effective Git service like GitHub, GitLab, or Bitbucket is crucial to keeping continuous integration running smoothly. You don’t have to deploy every commit to production right away, but quick automated deployments are a major part of being able to push rapid releases.
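The build-then-test loop that runs on every push can be sketched as a tiny pipeline runner. This is a minimal illustration, not any particular CI product; the stage names and placeholder commands are assumptions standing in for a real build system (e.g. `make build` or `pytest`).

```python
# Minimal sketch of a CI pipeline runner: each push triggers a build stage
# and a test stage. The inline commands are placeholders for real build steps.
import subprocess
import sys

def run_stage(name, cmd):
    """Run one pipeline stage; a non-zero exit code fails the whole build."""
    print("[ci] running stage:", name)
    return subprocess.run(cmd).returncode == 0

STAGES = [
    ("build", [sys.executable, "-c", "print('compiling...')"]),
    ("test",  [sys.executable, "-c", "print('running tests...')"]),
]

def pipeline():
    # Stages run in order and stop at the first failure, so a broken commit
    # is flagged quickly, before it drifts far from HEAD.
    return all(run_stage(name, cmd) for name, cmd in STAGES)

build_passed = pipeline()
```

Real CI services express the same idea declaratively (a list of ordered stages, each failing fast on a non-zero exit code) and trigger it automatically on every commit pushed to the shared repository.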


It's the biggest job in tech. So why can't they find anyone to do it?

The failure to appoint a senior leader to coordinate the mammoth task of digitizing public services is at odds with the government's rhetoric. Three years ago, the UK reiterated the need to create a "government as a platform" in a brand-new digital strategy, with the objective of harnessing the potential of digital to improve the efficiency of public services. The goal? To enable "digital by default" across government, and use technology and data to better serve citizens with digitally enabled public services that would be easier, simpler and cheaper. Since then, many reports have emerged stressing the difficulty of achieving this digital transformation journey without proper management from the very top. Last year, for instance, a report from the House of Commons' Science and Technology Committee found that the government's digital momentum was slowing, and that the shift was partly due to a lack of senior leadership. These failures have been especially palpable in the past few months. As the global COVID-19 pandemic turned the world upside down, the need for a government that effectively delivers digital services in a time of crisis became ever-more important.


Visa Warns of Fresh Skimmer Targeting E-Commerce Sites

The Visa alert does not indicate how Baka is initially delivered to a network. But the report notes that the malicious code is hosted on several suspicious domains, including: jquery-cycle[.]com, b-metric[.]com, apienclave[.]com, quicdn[.]com, apisquere[.]com, ordercheck[.]online and pridecdn[.]com. Once the initial infection takes hold, the skimmer is uploaded through the command-and-control server, but the code loads in memory. This means the malware is never present on the targeted e-commerce firm's server or saved to another device, helping it to avoid detection, according to the alert. "The skimming payload decrypts to JavaScript written to resemble code that would be used to render pages dynamically," according to Visa. Once embedded in an e-commerce site's checkout page, the skimmer begins to collect payment and other customer data from various fields and sends the information to the fraudsters' command-and-control server, Visa notes. Once data exfiltration is complete, Baka performs a "clean-up" function that removes the skimming code from the checkout page, according to the alert. This also helps ensure that JavaScript is not spotted by anti-malware tools.


Elon Musk is one step closer to connecting a computer to your brain

While the development of this futuristic-sounding tech is still in its early stages, the presentation was expected to demonstrate the second version of a small, robotic device that inserts tiny electrode threads through the skull and into the brain. Musk said ahead of the event he would “show neurons firing in real-time. The matrix in the matrix.” And he did just that. At the event, Musk showed off several pigs that had prototypes of the neural links implanted in their heads, and machinery that was tracking those pigs’ brain activity in real time. The billionaire also announced the Food and Drug Administration had awarded the company a breakthrough device authorization, which can help expedite research on a medical device. Like building underground car tunnels and sending private rockets to Mars, this Musk-backed endeavor is incredibly ambitious, but Neuralink builds on years of research into brain-machine interfaces. A brain-machine interface is technology that allows for a device, like a computer, to interact and communicate with a brain.




Quote for the day:

"The actions of a responsible executive are contagious." -- Joe D. Batton