Daily Tech Digest - January 24, 2021

The work-from-home employee’s bill of rights

Keeping business and personal data separate is straightforward for most cloud services, so legitimate security concerns can be addressed in such hybrid environments. Only in areas where IT cannot reasonably ensure security may businesses disallow specific optional technologies or hybrid usage. (The employee should be made aware that in such mixed-usage cases, should there ever be a legal proceeding, their personal devices used for work could be subject to discovery and thus be taken during the course of an investigation.) IT also must allow the use of personal services in such mixed-usage environments, such as letting users use personal Slack, Zoom, or Skype accounts for personal communications rather than blocking such software to force the use of a corporate standard. Instead, managers, not IT through technology barriers, would enforce the use of corporate-standard technology for business purposes. The basic principle should be that employees can bring their own technology into the mix unless it creates a clear security issue — and not a theoretical one, since IT too often cites security as an easy reason to say no to employee requests without any real evidence of a risk.


Artificial Intelligence Collaboration in Asia’s Security Landscape

Though the field of AI – a catchall term for a set of technologies that enable machines to perform tasks that require human-like capabilities – has been around for decades, interest in it has surged over the past few years, including across the Asia-Pacific. Individual countries have begun to develop their own national approaches, and multilateral groupings such as the OECD have formulated guidance such as principles on AI. In the security realm more specifically, AI is emerging as a key topic for defense policymakers and communities alike in a range of areas, from assessments of its impact on geopolitical competition to areas of potential collaboration between some Indo-Pacific partners and their expert communities. It has also been a topic of discussion among scholars and policymakers in annual Asian security fora such as the Shangri-La Dialogue and the Xiangshan Forum. Seen from this perspective, Mohamad’s highlighting of AI as an area of focus for Asian defense establishments was very much in keeping with these trends. As he noted in his keynote address, AI represents an emerging domain where armed forces and defense establishments can play a key role in efforts to “strengthen the international order and enhance practical cooperation” by promoting responsible state behavior, building confidence, and fostering international stability.


Is neuroscience the key to protecting AI from adversarial attacks?

For the new research, Cox and DiCarlo joined Joel Dapello and Tiago Marques, the lead authors of the paper, to see if neural networks became more robust to adversarial attacks when their activations were similar to brain activity. The AI researchers tested several popular CNN architectures trained on the ImageNet dataset, including AlexNet, VGG, and different variations of ResNet. They also included some deep learning models that had undergone “adversarial training,” a process in which a neural network is trained on adversarial examples to avoid misclassifying them. The scientists evaluated the AI models using the BrainScore metric, which compares activations in deep neural networks and neural responses in the brain. They then measured the robustness of each model by testing it against white-box adversarial attacks, where an attacker has full knowledge of the structure and parameters of the target neural network. “To our surprise, the more brainlike a model was, the more robust the system was against adversarial attacks,” Cox says. “Inspired by this, we asked if it was possible to improve robustness (including adversarial robustness) by adding a more faithful simulation of the early visual cortex — based on neuroscience experiments — to the input stage of the network.”
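To make the white-box attack idea concrete, here is a minimal, self-contained sketch of the Fast Gradient Sign Method, one common white-box technique, applied to a toy logistic-regression "model" rather than the ImageNet-scale CNNs the paper actually tests. All weights and inputs below are illustrative assumptions.

```python
import math

def predict(w, b, x):
    """Probability that input x belongs to class 1 under a toy logistic model."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(w, b, x, y, eps):
    """FGSM: nudge each input feature by eps in the direction that increases
    the loss, using full knowledge of the model (the white-box assumption).
    For logistic loss, the sign of dLoss/dx_i is sign((p - y) * w_i)."""
    p = predict(w, b, x)
    grad_sign = [math.copysign(1.0, (p - y) * wi) for wi in w]
    return [xi + eps * g for xi, g in zip(x, grad_sign)]

w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1               # correctly classified as class 1
x_adv = fgsm(w, b, x, y, eps=0.6)

print(predict(w, b, x) > 0.5)      # original input: predicted class 1
print(predict(w, b, x_adv) > 0.5)  # small perturbation flips the prediction
```

The same gradient-sign principle scales up to deep networks, which is why models need adversarial training or, per this research, more brain-like input stages, to resist it.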


Speed Limits in Software Development

For software development there aren’t road signs telling us a safe speed to deploy at, but perhaps we can extend the driving metaphor a bit more to help us think this through. One thing that relates to safe speed is responsiveness. A slick road makes it harder for your car to respond to changes in direction, and slow deployment makes it hard to respond to problems with your application. How easy is it to respond to issues in your application? Don’t forget that an F1 race car with new tires and perfect tuning can respond a lot better than the little commuter car you might have. We can tune our code and deployments and get better at responsiveness over time. If the road is foggy and you can’t see where you are going when you drive, I hope you slow down. If you can’t see what is going on in your application and understand how it is being used, I hope you slow down. ... So how fast can we go in software development? Well, in the ideal case if we know everything and have a smooth path ahead of us, pretty fast. I don’t think we can get to a land speed record since software development doesn’t often involve going in a straight line, but with a bit of work on the code and deployment process and with investment in observability and operations, I think we can go pretty fast, pretty safely. Just be careful.


Why the brain will always win in the battle against AI

What we call ‘intelligence’ is an activity of the brain. The outcome of that activity forms our ‘mind’ about things. Even when we sleep, our intelligence is awake and our mind is being formed. In this context, we must pay attention to the concept of duality as the first level of multivariate analysis. A hallmark of intelligence is the willingness to change one's mind. Humans can think in terms of ranges, options and spectral possibilities. Machines are only about specificity and exactness. Computing doesn’t entertain opinion. Yet, calculation is merely one aspect of our mental ability. It has been exaggerated in our education system. This kind of logic-based intelligence is quite self-conscious. We are assessed for deductive ability. We are tutored to think and know but not trained to ‘think about thinking’ or ‘know about not knowing’. We are barely taught any self-awareness. Emotional Intelligence is neglected. We are coached in analytical hindsight and acquire a punter’s foresight based on the computation of odds. No one educates us on esteem, gratification, empathy, or seduction. We learn these things by ourselves. The irony is that machines have beaten us on all those aspects that we acquire via structured learning and tutoring. It is in the emotional, subjective and artistic areas that mankind holds the advantage.


Data bill: The security vs privacy debate

Encryption is widely acknowledged as the strongest feature of data protection. Digital banking and financial transactions have increased manifold, with the Reserve Bank of India prescribing the encryption standards. The telecom sector, however, is limping along on 40-bit key encryption, which is considered to be low. Both cellular voice and messaging are vulnerable to off-air interceptions, with experts pointing at the weakness of SMS being used as second-factor authentication in banking, payments and Aadhaar identification. The Telecom Regulatory Authority of India has rightly recommended an update of regulation policy and is of the view that encryption is a reliable tool which should not be interfered with. End-to-end encryption on chat platforms is the most secure method of keeping data safe from hackers and break-ins. The General Data Protection Regulation of the European Union strongly favours the use of encryption for protecting individual data. However, security agencies around the world want decrypted data and favour legislation in this regard. The United States, United Kingdom and Australia support legislation for decryption, while France and Germany are pro-encryption.
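Some back-of-the-envelope arithmetic shows why 40-bit keys are considered weak: the whole key space can be exhausted quickly, while a modern 128-bit key space cannot. The brute-force rate below is an assumption chosen purely for illustration.

```python
# Assumed search rate for a modest brute-force rig; real rates vary widely.
keys_per_second = 10 ** 9

def worst_case_seconds(key_bits, rate):
    """Time to try every key in a key space of 2**key_bits keys."""
    return (2 ** key_bits) / rate

t40 = worst_case_seconds(40, keys_per_second)
t128 = worst_case_seconds(128, keys_per_second)

print(t40)                          # roughly 1,100 seconds: minutes, not years
print(t128 / (3600 * 24 * 365))     # an astronomically large number of years
```

At a billion keys per second, a 40-bit key falls in under twenty minutes, while each additional bit doubles the work, which is why regulators and the GDPR lean on strong modern encryption.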


New motto for CIOs: Move even faster and make sure nothing breaks

The stakes are high for IT professionals running digital transformation projects with consequences ranging from missed bonuses to going out of business, according to a new survey. The current motto for survival is "Move even faster and make sure nothing breaks," IT leaders told Kong in the company's 2021 Digital Innovation Benchmark report. Sixty-two percent of tech leaders said they are at risk of being replaced by competitors who innovate more quickly, according to the survey. Also, 51% of respondents said they will survive only three years before being acquired or simply going out of business if they can't evolve fast enough. That number goes up to 84% when the make-or-break timeline extends to six years. This number is up from 71% in last year's survey. ... The survey reinforces what many companies realized at the end of 2020: The pandemic accelerated digital transformation in general and cloud migrations in particular. Almost 40% of tech leaders in the US and Europe said that their companies also implemented microservices sooner than expected due to the pandemic.  A majority of respondents (87%) said that microservice-based applications, distributed applications, and open source software are the future of IT architecture. 


Reflect brings automated no-code web testing to the cloud

Every company is now a software company, or so we’re told, meaning they have to employ designers and developers capable of building websites and apps. In tandem, the much-reported software developer shortage means companies across the spectrum are in a constant battle for top talent. This is opening the doors to more automated tools that democratize some of the processes involved in shipping software, while freeing developers to work on other mission-critical tasks. It’s against this backdrop that Reflect has come to market, serving as an automated, end-to-end testing platform that allows businesses to test web apps from an end user’s perspective, identifying glitches before they go live. Founded out of Philadelphia in 2019, the Y Combinator (YC) alum today announced a $1.8 million seed round of funding led by Battery Ventures and Craft Ventures, as it looks to take on incumbents with a slightly different proposition. Similar to others in the space, Reflect hooks into the various elements of a browser so it can capture actions the user is taking, including scrolls, taps, clicks, hovers, field entry, and so on. This can be replicated later as part of an automated test to monitor the new user signup flow for a SaaS app, for example.
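The record-and-replay pattern described above can be sketched generically. Reflect's real implementation hooks browser internals; here a simple recorder logs (action, target) pairs and replays them against a stand-in page object. All class and function names are illustrative, not Reflect's API.

```python
class Recorder:
    """Captures user actions as (action, target) pairs, in order."""
    def __init__(self):
        self.events = []

    def capture(self, action, target):
        self.events.append((action, target))

def replay(events, page):
    """Re-run a recorded session against a page object and return the
    list of targets acted on, mimicking an automated regression test."""
    log = []
    for action, target in events:
        page[action](target)       # e.g. page["click"]("#signup")
        log.append(target)
    return log

# Record a toy signup flow once...
rec = Recorder()
rec.capture("click", "#signup")
rec.capture("type", "#email")

# ...then replay it later against a fake page that just records visits.
visited = []
fake_page = {"click": visited.append, "type": visited.append}
replay(rec.events, fake_page)
print(visited)
```

In a real tool the replay step would drive an actual browser and compare screenshots or DOM state, flagging any divergence as a potential regression before release.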


Immigration exemption in data protection law faces further legal challenge

Speaking to Computer Weekly about the appeal, ORG’s Scotland director Matthew Rice said the exemption, which is the first derogation of its kind in 20 years of UK data protection law, has been justified by the UK government on the grounds it needs to “stop people from learning that they’re about to be removed from the country” and consequently absconding. “There was no evidence to suggest that under previous data protection law…people were making subject access requests [SARs], getting back that they were due to get a visit from the immigration services, and then running away,” he said. “The other thing to bear in mind is that the exemption is blunt because immigration control isn’t defined in the act or in any part of UK law, and it’s not just about the Home Office or borders. Any data controller can apply this exemption – it’s available to your doctor, your landlord, your school, your local authority, any number of persons that might hold personal data about you.” ... The non-disclosure of personal data under the immigration exemption therefore not only interferes with the individual’s access rights, but a host of other digital rights granted by the GDPR as well, including the rights to rectification, erasure and restriction of processing.


How security pros can prepare for a tsunami of new financial industry regs in 2021

Biometrics can add an extra layer of security when unlocking a smartphone using a person’s face or fingerprint. But other technologies have raised privacy concerns among consumers, such as law enforcement leveraging facial recognition to identify wanted criminals via security cameras in a public space. This has led to outright bans of facial recognition technology in several cities, including Boston, San Francisco, Oakland, Portland, Oregon, and Portland, Maine. As these technologies become mainstream, we’ll need regulations to retain (or in some cases, regain) the trust of consumers and policymakers. As a step forward, we see international organizations push for global standards around the use of biometrics, for example, the FIDO Alliance and the Financial Action Task Force (FATF), which recently issued guidance on how to apply a risk-based approach to using digital identity systems for customer identification and verification. However, the U.S. lags behind other regions, which have been more progressive in their adoption of regulations, such as the General Data Protection Regulation (GDPR) in Europe. In the absence of federal standards, states such as California have implemented their own regulations, such as the California Consumer Privacy Act (CCPA) and its upgrade, the California Privacy Rights Act (CPRA).



Quote for the day:

"The first step of any project is to grossly underestimate its complexity and difficulty." -- Nicoll Hunt

Daily Tech Digest - January 22, 2021

Why it's vital that AI is able to explain the decisions it makes

The effort to open up the black box is called explainable AI. My research group at the AI Institute at the University of South Carolina is interested in developing explainable AI. To accomplish this, we work heavily with the Rubik’s Cube. The Rubik’s Cube is basically a pathfinding problem: Find a path from point A – a scrambled Rubik’s Cube – to point B – a solved Rubik’s Cube. Other pathfinding problems include navigation, theorem proving and chemical synthesis. My lab has set up a website where anyone can see how our AI algorithm solves the Rubik’s Cube; however, a person would be hard-pressed to learn how to solve the cube from this website. This is because the computer cannot tell you the logic behind its solutions. Solutions to the Rubik’s Cube can be broken down into a few generalized steps – the first step, for example, could be to form a cross while the second step could be to put the corner pieces in place. While the Rubik’s Cube itself has over 10 to the 19th power possible combinations, a generalized step-by-step guide is very easy to remember and is applicable in many different scenarios. Approaching a problem by breaking it down into steps is often the default manner in which people explain things to one another.
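The "find a path from point A to point B" framing above maps directly onto classic graph search. The lab's actual solver uses deep learning, but a toy breadth-first search on a tiny three-token puzzle (where a move swaps two adjacent tokens) illustrates the pathfinding structure; everything here is an illustrative simplification.

```python
from collections import deque

def neighbors(state):
    """All states reachable in one move: swap any two adjacent tokens."""
    out = []
    for i in range(len(state) - 1):
        s = list(state)
        s[i], s[i + 1] = s[i + 1], s[i]
        out.append(tuple(s))
    return out

def solve(start, goal):
    """Breadth-first search: returns the shortest path of states from
    start (the 'scrambled' state) to goal (the 'solved' state)."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

path = solve(("C", "A", "B"), ("A", "B", "C"))
print(len(path) - 1)   # number of moves in the shortest solution
```

The explainability gap the article describes shows up even here: the search returns a correct move sequence, but nothing in the output resembles the generalized, human-teachable steps ("first build a cross, then place the corners") that people use to explain solutions to each other.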


Why KubeEdge is my favorite open source project of 2020

The KubeEdge architecture allows autonomy on an edge computing layer, which solves network latency and velocity problems. This enables you to manage and orchestrate containers in a core data center as well as manage millions of mobile devices through an autonomous edge computing layer. This is possible because of how KubeEdge uses a combination of the message bus (in the Cloud and Edge components) and the Edge component's data store to allow the edge node to be independent. Through caching, data is synchronized with the local datastore every time a handshake happens. Similar principles are applied to edge devices that require persistency. KubeEdge handles machine-to-machine (M2M) communication differently from other edge platform solutions. KubeEdge uses Eclipse Mosquitto, a popular open source MQTT broker from the Eclipse Foundation. Mosquitto enables WebSocket communication between the edge and the master nodes. Most importantly, Mosquitto allows developers to author custom logic and enable resource-constrained device communication at the edge.


DevOps, DevApps and the Death of Infrastructure

The godfather of the DevOps movement, Patrick Debois, often speaks about how we are moving to a more service-oriented or serviceful intranet. I have been calling this riff on DevOps deployment methodology, DevApps. This is an emerging design pattern where cloud native applications are a combination of bespoke services (like Twilio, Salesforce, and many others) alongside custom software deployed as functions on scale-to-zero web services like AWS Lambda. Services are being managed with Terraform, just as the services of the past had been managed by Chef or Puppet. Once organizations tackle the well-accepted practice to automate deployment, the next frontier is to create applications that are composable via automated means. What we’re talking about here is layering integration-as-code on top of infrastructure-as-code. With a wide variety of cloud services at their disposal, application developers need not worry about the latter — just the former. At TriggerMesh, we are seeing more and more organizations looking to create applications that are configured with automated workflows on the fly.


5 Qualities Of Highly Engaged Teams

Trust is not just the cornerstone of leadership. It is also a fundamental building block in high-performance teams. When teams trust each other, it gives them more confidence in their abilities. They know they will get support when needed. Also, they will be willing to provide support to teams in need. This collaboration and cooperation help the sharing of best practices, which raises the level of the whole team, or teams. Trust is one of those reflexive qualities; the more the leader shows trust, the more they will be trusted. The more we trust our teams, the more they will trust themselves and each other. Leaders need to be the role model when it comes to this but also need to go that extra step to provide support and to ask for it. Leaders who can show this vulnerability make it ok for their teams to ask for help when needed, as well as give it. Teams that consistently deliver are teams that feel empowered, teams that understand what needs to be done and have the tools to achieve it. This empowerment boosts self-confidence and the belief that the teams will reach their goals. Being engaged is great, but being engaged without being empowered can lead to frustration and disengagement.


Four key real world intelligent automation trends for 2021

In 2021, there will be an overdue re-think of how organisations choose RPA and intelligent automation technologies. We’ll see greater selection rigour fuelling more informed assessments of these technologies’ abilities to successfully operate and scale in large, demanding, front-to-back-office enterprise environments, where performance, security, flexibility, resilience, usability, and governance are required. ... For an RPA or intelligent automation programme to really deliver, a strategy and purpose are needed. This could be improving data quality, operational efficiency, process quality and employee empowerment, or enhancing stakeholder experiences by providing quicker, more accurate responses. By examining the experiences and proven outcomes of those organisations with mature automation programmes, we’ll see more meaningful methods of measuring the impact of RPA and intelligent automation. ... This year, there will also be a greater understanding of which vendor software robots really possess the ability to be ‘the’ catalyst for digital transformation. These robots are typically pre-built, smart, highly productive and self-organising processing resources that perform joined-up, data-driven work across multiple operating environments of complex, disjointed, difficult-to-modify legacy systems and manual workflows.


Why North Korea Excels in Cybercrime

The cybercrime market's size and the scarcity of effective protection continue to be a mouth-watering lure for North Korean cyber groups. The country's cyber operations carry little risk, don't cost much, and can produce lucrative results. Nam Jae-joon, the former director of South Korea's National Intelligence Service, reports that Kim Jong Un himself said that cyber capabilities are just as important as nuclear power and that "cyber warfare, along with nuclear weapons and missiles, is an 'all-purpose sword' that guarantees our [North Korea's] military's capability to strike relentlessly." Other reports note that in May 2020, the North Koreans recruited at least 100 top-notch science and technology university graduates into its military forces to oversee tactical planning systems. Mirim College, dubbed the University of Automation, churns out approximately 100 hackers annually. Defectors have testified that its students learn to dismantle Microsoft Windows operating systems, build malicious computer viruses, and write code in a variety of programming languages. The focus on Windows may explain the infamous North Korean-led 2017 WannaCry ransomware cyberattack, which wrought havoc on more than 300,000 computers across 150 countries by exploiting vulnerabilities in the popular operating system.


To see the future more clearly, find your blind spots

There are multiple causes for the blind spots. One is a persistent state of denial, described in four parts by an emergency management professional after Hurricane Katrina: “One is, it won’t happen. Two is, if it does happen, it won’t happen to me. Three: If it does happen to me, it won’t be that bad. And four: If it happens to me and it’s bad, there’s nothing I can do to stop it anyway.” To this, I’m sure we can now add a fifth rationalization: “It won’t happen again.” Denial, however, has never been a successful strategy. An additional cause of blind spots is an overreliance on available data. Executives have benefited greatly from increased insights derived through analytics and other sophisticated methods of pattern recognition. The limitation of these tools, however, is that they can’t detect the “dog that didn’t bark,” a reference to a Sherlock Holmes case in which the crucial clue is not what happened but what did not. Leading is, in part, about bringing an organization into the future, and so executives should sharpen their thinking to include not only what they can see clearly but also what they can’t. A third cause is conditions that can tightly bind thinking.


Being Future Ready Is The Only Way To Survive In Data Science Field

There are three key skills for any data scientist: first, a strong hold on mathematics and statistics; secondly, a programming language base for different tasks such as data processing, storage, etc.; and lastly, domain knowledge. When you are working in a company, you must think about what value you are adding. Having acquired these skills, next comes constant upgradation and upskilling. There is a sea of resources available online. For example, Coursera and edX are good sources for theoretical introductions to a variety of topics. For a more practical approach, aspirants may check DataCamp and Udemy. I would also suggest using Kaggle, participating in hackathons, and undertaking internships to gain an edge. It is also important to think from the perspective of being ready for future challenges, given this field’s dynamic nature. It does get difficult to catch up with every new model or concept. I find it difficult too. What I tend to do is look at the bigger picture, and once a tech starts picking up pace, I spend time understanding it. The secret lies in following the broad macro trend, not just in DS but in the complete tech space.


How to implement a DevOps toolchain

A good DevOps toolchain is a progression of different DevOps tools used to address a specific business challenge. Connected in a chain, they guarantee a productive cycle between front-end and back-end developers, quality analysts, and customers. The goal is to automate development and deployment processes to ensure the rapid, reliable, and budget-friendly delivery of innovative solutions. We found out that building a successful DevOps toolchain is not a simple undertaking. It takes experimentation and nonstop refinement to guarantee that essential processes are fully automated. A DevOps toolchain automates all of the technical elements in your workflow. It also gets different teams on the same page so that you can focus on a business strategy to drive your organization into the future. We have identified five compelling benefits in support of DevOps toolchain implementation. ... A fully enabled and properly implemented DevOps toolchain propels your innovation initiatives from start to end and ensures prompt deployment. Your toolchain will look different than this, depending on your requirements, but I hope seeing our workflow gives you a sense of how to approach automation as a solution.


3 Essential Steps to Exploit the Full Power of AI

A key to generating a good ROI is in executing data, automation, analytics and AI initiatives. Close to 23% of respondents have already set up or are in the process of setting up an AI Center of Excellence that shares and coordinates resources across different areas of the company. This number has risen from 18% just a year back. Also, nearly 19% of companies have a company-wide AI leader who oversees AI strategy and governance. Such an integrated delivery model makes sense because of the convergence of the cloud infrastructure that provides the storage and compute, the data that is the raw material for the analysis, the automation that operates on the technology infrastructure, the analytics that operates on the data to generate better insights, and the AI that enhances both the automation and the analytics; this convergence has resulted in decreased costs and better revenues. In large (greater than $1 billion in revenues) companies, the existing data and analytics group has expanded its remit to include AI. Companies that currently have separate centers of excellence (COE) for analytics and/or automation and/or AI must integrate, or at the very least, coordinate their initiatives. Doing so would provide more seamless integration and yield better ROI. Companies that are just starting their journey in analytics and AI can start with an analytics or automation COE that expands to include AI capabilities.



Quote for the day:

"Our expectation in ourselves must be higher than our expectation in others." -- Victor Manuel Rivera

Daily Tech Digest - January 21, 2021

15 SLA mistakes IT leaders still make

SLAs have often been a point of contention — not only between providers and customers, but within organizations themselves. “It often boils down to IT leaders hating to read legal agreements while procurement and legal teams can be focused on business and financial risk rather than IT dependencies or the impact of system outages to delivering services,” says Joel Martin, cloud strategies research vice president at HFS Research. And as companies move more solutions to the cloud, understanding the service levels agreed to is important to developing trusted and dependable relationships. Moreover, SLA development and management has evolved significantly in recent years, with an eye toward driving business value. “Service recipients have become far more sophisticated in how they manage SLAs,” says Marc Tanowitz, managing director with West Monroe, adding that they “are looking for end-to-end outcomes that drive business success and recognize that the true value of SLAs is to drive business insights and performance — rather than to reduce the cost of service by capturing performance credits.” Nonetheless, there remain some common — and potentially costly — SLA mistakes IT leaders can make. Following are some of the most detrimental to the IT organization and the business at large.


Ransomware provides the perfect cover

Attackers are constantly creating new variants that evade detection by traditional signature-based approaches. To counteract these attacks, firms need to have defence in depth. This starts with preventing threat actors from infiltrating the network by defending against tactics such as phishing and malware campaigns through staff training, the use of strong passwords, 2FA, and patch management. If a threat actor makes it onto the system, their potential for lateral movement is limited when organizations have deployed a least-privilege approach, where access to files and folders is limited based on job role or seniority. Behavioral anomalies are a prime indicator that a threat actor could be on the network. This includes encrypting or downloading large amounts of data or user accounts trying to access restricted data. Successfully spotting such behaviour requires correlating data from many sources, including endpoint and network detection and response solutions. Finally, to ensure they can recover quickly in the event of a ransomware attack, organizations must also have robust backups that they can rely on if their network does go down.
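One of the behavioural signals described above, an account suddenly moving far more data than usual, is simple to express as a baseline check. This is an illustrative sketch, not any product's detection logic; the threshold factor and the traffic numbers are made up.

```python
def anomalies(history, today, factor=5.0):
    """Flag users whose download volume today exceeds `factor` times
    their own historical daily average (a crude per-user baseline)."""
    flagged = []
    for user, daily_mb in history.items():
        baseline = sum(daily_mb) / len(daily_mb)
        if today.get(user, 0) > factor * baseline:
            flagged.append(user)
    return flagged

history = {
    "alice": [120, 90, 150],     # MB per day, typical usage
    "bob":   [40, 60, 50],
}
today = {"alice": 130, "bob": 2000}   # bob suddenly pulls ~2 GB

print(anomalies(history, today))
```

Real deployments correlate many such signals (endpoint, network, and identity telemetry) rather than relying on a single per-user average, but the per-user baseline is the core idea: anomaly is defined relative to that account's own history.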


Cisco tags critical security holes in SD-WAN software

The first critical problem–with a Common Vulnerability Scoring System rating of 9.9 out of 10–is a vulnerability in the web-based management interface of Cisco SD-WAN vManage Software. “This vulnerability is due to improper input validation of user-supplied input to the device template configuration,” Cisco stated. “An attacker could exploit this vulnerability by submitting crafted input to the device template configuration. A successful exploit could allow the attacker to gain root-level access to the affected system.” This vulnerability affects only the Cisco SD-WAN vManage product, the company stated. The second critical Cisco SD-WAN Software issue–with a CVSS rating of 9.8–could let an unauthenticated, remote attacker cause a buffer overflow. “The vulnerability is due to incorrect handling of IP traffic,” Cisco stated. “An attacker could exploit this vulnerability by sending crafted IP traffic through an affected device, which may cause a buffer overflow when the traffic is processed. A successful exploit could allow the attacker to execute arbitrary code on the underlying operating system with root privileges.”


Microsoft Releases New Info on SolarWinds Attack Chain

According to Microsoft, the attackers achieved this by using a known MITRE attack method called event-triggered execution, where malicious code is executed on a host system when a specific process is launched. In this case, the threat actors used the SolarWinds process to create a so-called Image File Execution Options (IFEO) registry value for running the malicious VBScript file when the dllhost.exe process is executed on the infected system. The dllhost.exe process is a legitimate Windows process for launching other applications and systems. When triggered, the VBScript then runs another executable that activates the Cobalt Strike DLL in a process that is completely disconnected and separate from the SolarWinds process. The VBScript then also deletes the IFEO registry value and other traces of the sequence of events that happened, according to Microsoft. The full motives behind the operation and its victims remain unclear — or at least publicly undisclosed — though some believe it may have been for corporate espionage or spying. FireEye, Microsoft, the US Cybersecurity and Infrastructure Security Agency (CISA), and numerous others have described the operation as being the work of a highly sophisticated state-backed actor.


Accessible 5G: Making it a reality

To make 5G truly accessible to businesses, customers and consumers, we need to improve connectivity for all by eventually converging cellular and satellite networks to provide coverage both on land and via geo-satellite. While 3G and 4G were primarily created to improve mobile services for mobile device users, 5G is expected to support a much wider scope of IoT applications. With more intelligence being packed into smart, connected devices, we’ll need seamless connectivity and coverage. The hybrid network will enable all types of industries, from education and healthcare to construction and manufacturing, to not only use IoT technology to improve services and efficiencies but remove operational complexities, such as in-building coverage for more remote locations and black spots in connectivity when laying foundations – think basement renovations and housing developments in remote landscapes. As 5G-enabled smart devices and IoT applications increase, so too will the volume of data transactions between devices in the home: Smartphones, tablets, TVs, voice-assistance, and white goods like refrigerators and smart ovens. The sheer volume of applications transferring data to communicate with each other, for example, using voice assistance to dim the lights and select a film to watch for a night in, will require robust and seamless connectivity for the perfect experience.


Fueled by Record Profits, Ransomware Persists in New Year

In 2020, exfiltrating data from victims before crypto-locking their systems, then naming and shaming them via leak sites, became common. The tactic was pioneered by the now-defunct Maze group in late 2019, and many other groups followed suit, including Clop, DoppelPaymer, Nefilim, Sekhmet and, more recently, Avaddon. DoppelPaymer was also tied to an attack against a hospital in Germany, which led to a seriously ill patient having to be rerouted to another hospital. "This individual later died, though German authorities ultimately did not hold the ransomware actors responsible because the German authorities felt the individual's health was poor and the patient likely would have died even if they had not been re-routed," the FBI notes in a private industry alert issued last month. For exfiltrating data, "size doesn't matter" for attackers, Sophos says. "They don't seem to care about the amount of data targeted for exfiltration. Directory structures are unique to each business, and some file types can be compressed better than others. We have seen as little as 5GB, and as much as 400GB, of compressed data being stolen from a victim prior to deployment of the ransomware." 


The state of the dark web: Insights from the underground

According to Raveed Laeb, product manager at KELA, the dark web of today represents a wide variety of goods and services. Although traditionally concentrated in forums, dark web communications and transactions have moved to different mediums including IM platforms, automated shops, and closed communities. Threat actors are sharing covert intelligence on compromised networks, stolen data, leaked databases and other monetizable cybercrime products through these mediums. “The market shifts are focused on automation and servitization [subscription models], aimed at aiding the cybercrime business to grow at scale,” says Laeb. “As can be witnessed by the exponential rise of ransomware attacks leveraging the underground financial ecosystem, the cybercriminal-to-cybercriminal markets allow actors to seamlessly create a supply chain that supports decentralized and effective cybercrime intrusions—giving attackers an inherent edge.” ... “Defenders can exploit these robust and dynamic ecosystems by gaining visibility into the inner workings of the underground ecosystem—allowing them to trace the same vulnerabilities, exposures, and compromises that would be leveraged by threat actors and remediate them before they get exploited,” says Laeb.


New MIT Social Intelligence Algorithm Helps Build Machines That Better Understand Human Goals

While there’s been considerable work on inferring the goals and desires of agents, much of this work has assumed that agents act optimally to achieve their goals. However, the team was particularly inspired by a common way of human planning that’s largely sub-optimal: not to plan everything out in advance, but rather to form only partial plans, execute them, and then plan again from there. While this can lead to mistakes from not thinking enough “ahead of time,” it also reduces the cognitive load.  For example, imagine you’re watching your friend prepare food, and you would like to help by figuring out what they’re cooking. You guess the next few steps your friend might take: maybe preheating the oven, then making dough for an apple pie. You then “keep” only the partial plans that remain consistent with what your friend actually does, and then you repeat the process by planning ahead just a few steps from there.  Once you’ve seen your friend make the dough, you can restrict the possibilities only to baked goods, and guess that they might slice apples next, or get some pecans for a pie mix. Eventually, you’ll have eliminated all the plans for dishes that your friend couldn’t possibly be making, keeping only the possible plans (i.e., pie recipes). Once you’re sure enough which dish it is, you can offer to help.
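The filtering loop described above is easy to sketch in code. The toy version below (the recipes and action names are invented for illustration; this is not MIT's actual algorithm, which samples and re-plans probabilistically) keeps only the goal hypotheses whose plans remain consistent with the actions observed so far:

```python
# Toy goal inference: filter candidate plans against observed actions.
# Recipes and action names are hypothetical examples, not real training data.
RECIPES = {
    "apple pie":   ["preheat oven", "make dough", "slice apples", "bake"],
    "pecan pie":   ["preheat oven", "make dough", "chop pecans", "bake"],
    "fruit salad": ["slice apples", "chop pecans", "mix"],
}

def infer_goals(observed_actions, recipes):
    """Return the goals whose plan begins with the observed action sequence."""
    hypotheses = dict(recipes)
    for i, action in enumerate(observed_actions):
        # Keep only the plans whose i-th step matches what we actually saw.
        hypotheses = {goal: plan for goal, plan in hypotheses.items()
                      if i < len(plan) and plan[i] == action}
    return sorted(hypotheses)

print(infer_goals(["preheat oven"], RECIPES))
# ['apple pie', 'pecan pie'] -- fruit salad is eliminated
print(infer_goals(["preheat oven", "make dough", "slice apples"], RECIPES))
# ['apple pie'] -- confident enough to offer help
```

Each new observation prunes the hypothesis set, mirroring the way the observer discards dishes the cook couldn't possibly be making.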


5G: Opportunities and Challenges for Electric Distribution Companies

While the primary focus for this new technology from a common carrier's perspective seems to center around broadband services, the areas most likely to matter to electric utilities will be the increased capacity to support field area network needs for connected grid devices. The "Grid of Things" will greatly benefit from the connectedness afforded by the larger IoT. "We plan to leverage our AMI network for connectivity needs, but that may change as we deploy more 'grid-edge' devices," said an executive of a mid-sized mid-Atlantic utility. Low-latency services potentially offer the opportunity to leverage this technology to support mission-critical applications, such as protective relay management, SCADA, and substation communications. "Use of 5G can potentially provide SCADA and other system data over a cellular network versus a hard-wired solution through fiber or copper," said a general manager of a Connecticut public utility. The high-data-rate mmWave wireless broadband services may be applied to augmented/virtual reality (AR/VR), an area that utilities such as Duke Energy, along with EPRI, are actively exploring, and to unmanned aerial vehicles (UAVs) that will improve asset management and visualization.


Financial institutions can strengthen cybersecurity with SWIFT’s CSCF v2021

SWIFT created the CSP to support financial institutions in protecting their own environments against cybercrime. The CSP established a common set of security controls, the Customer Security Controls Framework (CSCF), designed to help users secure their systems with a list of mandatory controls, community-wide information-sharing initiatives, and security features on their payment infrastructure. The CSCF is designed to evolve based on threats observed across the transaction landscape. The CSCF's controls are centered around three overarching objectives: Secure your environment; Know and limit access; and Detect and respond. The updated CSCF v2021 includes changes to existing controls and additional guidance and clarification on implementation guidelines. The newest version includes 31 security controls: 22 mandatory and nine advisory. Mandatory controls must be implemented by all users on the user's local SWIFT infrastructure. Advisory controls are based on recommended best practices advised by SWIFT.



Quote for the day:

"Education is what survives when what has been learned has been forgotten." -- B. F. Skinner

Daily Tech Digest - January 20, 2021

New Intel CPU-level threat detection capabilities target ransomware

Detecting ransomware programs has never been easy, and attackers have always found ways to evade security products. The sophisticated groups that use manual hacking and perform months-long reconnaissance and lateral movement inside corporate networks will know very well what malware detection software their victims are using and can test in advance to make sure their payload will not be detected. This is part of the reason why ransomware campaigns are so effective and devastating to organizations. Aside from signature-based detection, security products attempt to detect ransomware-like behavior by monitoring for unusual patterns in file activity. For example, the reading and writing of a large number of files in certain directories or with certain file types in rapid succession can indicate suspicious activity. A significant difference between the contents of an overwritten file and the original is another signal, since an encrypted file will look totally different from the original file. Attempts to delete Volume Shadow Copy Service (VSS) backups can also be indicative of ransomware. All these signals together can be used to detect ransomware, but attackers can still try to hide, for example, by slowing down file encryption and executing it in batches.
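One of the signals above, that an encrypted file looks totally different from the original, is often approximated by comparing byte entropy before and after a write, since ciphertext is close to uniformly random. A minimal sketch (the two-bits-per-byte jump threshold is illustrative, not taken from any real product):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: approaches 8.0 for random/encrypted data, far lower for text."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

def looks_encrypted(before: bytes, after: bytes, jump: float = 2.0) -> bool:
    """Flag a rewrite whose entropy leaps toward the 8-bits-per-byte ceiling."""
    return shannon_entropy(after) - shannon_entropy(before) > jump

report = b"quarterly report: revenue up 4%, costs flat " * 100
random_looking = bytes(range(256)) * 16  # stand-in for encrypted output
print(looks_encrypted(report, random_looking))  # True: a suspicious rewrite
```

In a real product this heuristic would be combined with the other signals mentioned, precisely because batched or throttled encryption can dilute any single indicator.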


Streaming Data From Files Into Multi-Broker Kafka Clusters

Kafka Connect is a tool for streaming data between Apache Kafka and external systems. The FileSource connector streams data from files into Kafka topics, and the FileSink connector sinks data from a topic into another file. Many other connector types are available, such as the Kafka Connect JDBC Source connector, which imports data from any relational database with a JDBC driver into an Apache Kafka topic. Confluent has developed numerous connectors for importing and exporting data between Kafka and sources such as HDFS, Amazon S3, and Google Cloud Storage; connectors are available under commercial licenses as well as the Confluent Community License. The file that continuously receives data from the topic can be thought of as a consumer. The FileSource and FileSink connectors are part of the open-source Apache Kafka ecosystem and do not require any separate installation.
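In standalone mode, each of these connectors is driven by a short properties file. The examples below follow the stock quickstart configuration shipped with Apache Kafka; the file paths and topic name are placeholders:

```properties
# connect-file-source.properties: stream lines appended to test.txt into a topic
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=test.txt
topic=connect-test

# connect-file-sink.properties: write records from the topic out to another file
name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=test.sink.txt
topics=connect-test
```

Both are run together with the standalone worker, e.g. `bin/connect-standalone.sh config/connect-standalone.properties connect-file-source.properties connect-file-sink.properties`. Note that the source connector takes a singular `topic` while the sink takes a plural `topics` list.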


Legacy security architectures threaten to disrupt remote working

Connecting users often came at the expense of other factors, such as security, performance and management. As most respondents (81%) expect to continue working from home (WFH), 2021 will see enterprises address those other areas, evolving their remote access architectures to protect the remote workforce without compromising on the user experience. Yet securing the remote workforce has proved challenging for IT professionals. Enforcing corporate security policies on remote users was the second most common security challenge (58% of respondents), while 57% indicated they lacked the time and resources to implement recognised security best practices. Boosting remote access performance was found to be the most popular use case for 2021, cited by 47% of respondents. SASE was also an increasing focus for enterprises in post-pandemic 2021: half of respondents (52%) said SASE would be very or extremely important to their businesses post-Covid-19, and as many as 91% expected SASE to simplify management and security. Providing evidence of how SASE is benefiting organisations, Cato found that of those firms that had already adopted SASE, 86% experienced increased security, 70% indicated time savings in management and maintenance...


Companies turning to MSPs as attack vectors get more sophisticated

Security is not the only top driver. Finance leaders chose reduced costs (57%) as their top reason, noting that an MSP is less expensive than hiring talent internally. For e-commerce retailers, increased security (46%) and reduced costs (46%) tied for the top spot. “It’s never been more critical to have an encrypted backup and disaster recovery solution to ensure your business is always up and running. The increased threats to companies and MSPs have never been this severe, and it’s going to continue to get worse,” said Infrascale CEO Russell P. Reeder. “In this ever more challenging landscape, data protection and data recovery are top priorities for MSPs serving clients, especially as attack surfaces expand and attack vectors get more sophisticated,” he continued. The survey further revealed which MSP services are most prominent for each industry. Finance (53%), education (51%), and healthcare (53%) executives all noted that the top service they leverage most with their MSPs is data protection, while manufacturing executives specified a subset of that category, cybersecurity services (58%) — focusing on computer network environments as their top MSP service.


Why CIOs Must Set the Rules for No-Code, Low-Code, Full-Code

A no-code application uses point-and-click visual tools that users drag and drop in order to create an application. No knowledge of coding is needed. This is strictly point-and-click development on a visual user interface that gives access to data, basic logic and data display choices. Best fit: No-code development works when the data and queries the user needs are basic and the tool can integrate with the data sources that have predefined APIs. No-code tools are ideal for rapid-turnaround applications that use and report basic information -- like, what are the sales numbers for our air conditioning products this month? The tools are used with transactional data, not with unstructured, big data. Low-code development tools have point-and-click, graphical user interfaces similar to those found in no-code tools, but low code also allows developers to add pieces of custom code that provide functions not handled by the low-code platform. Best fit: For applications that must be integrated with other systems and databases, as well as delivering rapid time to market, low-code tools make excellent platforms. Low code also enables non-programming users to collaborate in developing apps with more technical IT programmers.


Tips for a Bulletproof War Room Strategy

In today's environment, especially in larger companies, employee skill sets are getting more technically diverse with stand-alone teams spanning cloud, network, development, automation, and more. As much as these teams may want to work in their own lane, there is no denying that their work directly affects other groups in the organization. When they send updates or find an exploit that threatens their system, it's not just their system that is impacted. It can produce massive consequences across all areas of the business. ... In combat, one of the biggest mistakes that could cause you to lose your position is indecision. In security, when a breach occurs, teams can't afford to disagree. War rooms are built to enable quick decision-making by empowering need-to-know decision-makers with the authority needed to respond rapidly. An effective war room brings together the right people and the right information so that the right decisions can be quickly made. ... In another, you can elevate that war room into an actual live incident or bring together a group of senior management to plan out the risk posture for the foreseeable future, whether that's the next quarter, the next year, or maybe for a large upcoming event where they want to plan for attack possibilities.


Microsoft Taking Additional Steps to Address Zerologon Flaw

Some security experts say Microsoft is taking the right step to ensure that customers' networks remain safe even if they haven't applied the patch. "Microsoft seems to expect that patching all devices out there will take a substantial amount of time, so it takes this backup approach to mitigate the risk for its customers," says Dirk Schrader, global vice president at security firm New Net Technologies. "The difficulty for those customers, given the pandemic situation of working from home, is to find and patch all vulnerable devices. It is time to scan and check all devices, monitor them for unwanted changes, to find and patch as quickly as possible." Jigar Shah, vice president of security firm Valtix, notes that Active Directory remains important to companies that rely on cloud platforms, such as Azure. So, they want to be assured that their infrastructure is secure even if that requires Microsoft to force the issue. "Active Directory domain controllers are still fundamental to enterprise apps in public clouds," Shah says. "And the battle is to continuously and automatically do virtual patching until software vendors roll out patches that can be deployed, something that often takes weeks and months..."


Study: Cloud transformation necessary for digital transformation

Cloud migration is a necessary step for digital transformation, which is proceeding faster than planned at many enterprises because of the COVID-19 pandemic, according to research from Cloud Industry Forum (CIF), a cloud computing organization based in the United Kingdom. The cloud is an important steppingstone for getting off legacy on-prem technologies and outfitting today's more flexible, remote workforce. Supporting a remote workforce requires a digital transformation, and to do that, companies need the cloud – public, private, or hybrid. CIF found that in many sectors, remaining productive during lockdown depended on cloud-readiness. Migrating to the cloud has delivered results for more than 90% of organizations during the past year, according to the CIF research. In addition, 91% of decision makers said that cloud formed an important part of their digital transformation, with 40% saying the role of the cloud was crucial. COVID-19 has been a significant driver. A majority of organizations (69%) have sped up their digital transformation plans in some way as a result of the pandemic, according to the research. "On the whole, organizations did a commendable job of adapting in the face of an unprecedented situation; it is safe to say that many have been pleasantly surprised at how successful the shift to remote working has been."


Digital Transformation: How Leaders Can Stand Out

Enterprise CIOs are contending with the impact of COVID-19 on their IT priorities and tech spending. In order to prioritize what is indispensable, there should be a strong focus on embracing technology that puts the bottom line first. There's a huge opportunity to streamline repetitive, time-consuming tasks across departments, from marketing to sales and customer service, freeing up time and shortening feedback loops. Traditional digital transformation initiatives often overlook the edges of the business, where employees are stuck relying on manual processes, spreadsheet solutions and outdated legacy systems for business workflows. Organizations have to be able to respond to changes quickly, whenever they come up, from anywhere in the business. Having digital tools in place that allow for automation and enhanced processes is crucial not only for saving time and money, but also for providing real-time insights and opportunities to adapt quickly to customer demands, employee needs and overall disruption. The shortage of software developer talent is well-documented, and IT departments are overwhelmed without the support they desperately need.


2021 Trends in Blockchain: Mainstream Adoption at Last

The most emergent Blockchain trend of the year is the motion towards solving its scalability issues via the cloud. There are plentiful cryptocurrency use cases in which the notion of scale—both horizontal and vertical, reflecting mounting numbers of users and data—induces considerable latency, almost derailing this technology’s value. A practical solution to this necessity stemming from blockchain’s decentralized consensus approach to transaction validation is employing serverless computing architecture to resolve the latency resulting from the conventional approach, in which “every machine is doing the same work,” Wagner revealed. “If one runs out of space, memory, compute, or network capacity, game over.” However, by relying on serverless architecture to spin up machines on demand, “that serverless implementation lets us recruit hundreds, thousands, even tens of thousands of machines for every individual node of a blockchain,” Wagner explained. This method enables organizations to devote whatever resources they need to validate transactions with these decentralized ledgers, dramatically reducing the latency and downtime otherwise inherent to scaling up.



Quote for the day:

"Make every detail perfect and limit the number of details to perfect." -- Jack Dorsey

Daily Tech Digest - January 19, 2021

Superintelligent AI May Be Impossible to Control; That's the Good News

The researchers suggested that any algorithm that sought to ensure a superintelligent AI cannot harm people had to first simulate the machine's behavior to predict the potential consequences of its actions. This containment algorithm would then need to halt the supersmart machine if it might indeed do harm. However, the scientists said it was impossible for any containment algorithm to simulate the AI's behavior and predict with absolute certainty whether its actions might lead to harm. The algorithm could fail to correctly simulate the AI's behavior or accurately predict the consequences of the AI's actions, and not recognize such failures. "Asimov's first law of robotics has been proved to be incomputable," Alfonseca says, "and therefore unfeasible." We may not even know if we have created a superintelligent machine, the researchers say. This is a consequence of Rice's theorem, which essentially states that one cannot, in general, figure anything out about what a computer program might output just by looking at the program, Alfonseca explains. On the other hand, there's no need to spruce up the guest room for our future robot overlords quite yet. Three important caveats to the research still leave plenty of uncertainty around the group's predictions.
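The impossibility argument has the flavor of the classic halting-problem diagonalization, which can be made concrete in a toy form: hand any candidate "harm checker" to a program that consults the checker about itself and then does the opposite. The sketch below is a hypothetical illustration of that proof style, not the researchers' actual construction:

```python
def make_adversary(checker):
    """Build a program that does the opposite of whatever `checker` predicts.

    Behavior is modeled as a boolean: True means "acts safely", False means
    "does harm". Any containment algorithm expressible as `checker` faces this trap.
    """
    def adversary():
        return not checker(adversary)  # ask the checker about itself, then defy it
    return adversary

def optimistic_checker(program):
    """A candidate containment algorithm that certifies every program as safe."""
    return True

adv = make_adversary(optimistic_checker)
print(optimistic_checker(adv), adv())  # True False: the certification is wrong
```

Whatever the checker answers, the adversary's actual behavior contradicts it, so no single checker can be right about every program it might be handed.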


Rethinking Active Directory security

A change made within on-premises Active Directory by an attacker can provide access to much more than just local resources. An attacker can, for example, make a compromised on-premises user account a member of a Sales group in Active Directory. This group likely would provide access to on-premises systems, applications, and critical data. But because Active Directory often federates with cloud applications via an external IdP (e.g., Azure AD), it's reasonable to assume that this same change in membership could allow access to a cloud-based CRM environment (like Salesforce), customer data (hopefully contained to the breached account, but more likely the entire organization's data) and other resources. Many cyberattacks are more complex than the example above: it's necessary to gain elevated privileges via one account only to compromise a second, third, and so on, each time moving from system to system or – in the case of a hybrid environment – from on-premises to cloud, leveraging access to on-premises Active Directory to specifically target accounts known to have access in the cloud.


The Great Compromise In AI’s Buy Vs Build Dilemma

Building AI in-house presents a variety of benefits. When done right, a built approach can lead to a stable, production-grade AI solution that is perfectly tailored to the specific needs and requirements of an industry or company. Digital natives have shown the impact of building AI from scratch. IBM is a prominent example of a business that has launched successful in-house AI into production. A recent report found IBM’s Watson Assistant AI paid itself back in just 6 months, with a three-year ROI of 337%. For digital adopters however, successfully building and implementing an AI solution in house is easier said than done without access to sizable capital and infrastructure. “When building an AI solution in-house, companies typically hire a team without significantly investing in the foundational elements that are required to stabilize AI in complex and dynamic environments,” suggests Nurit Cohen Inger, VP of Products at AI company BeyondMinds. “This approach, unfortunately, has typically meant a long and costly process to reach ROI positivity or in the worst case, never achieving production. Before developing AI solutions, businesses must heavily invest in solving the barriers that hold them back from turning proof of concepts into successful solutions in production.”


Training from the Back of the Room and Systems Thinking in Kanban Workshops

It’s very tempting to put everything you know on a training agenda, especially when you, as a trainer, feel that you have to know everything and constantly impress the learners. It’s always hard to chop workshop content into the bare minimum, especially when you have a lot of knowledge, experience, and fun stories to share. But if you are aiming for deep understanding and a lot of practice, less content translates into more value. Overloading groups with new information may lead to chaos during your class. They will struggle to understand which new tool or technique they should use first. In the end, they may just quit before they even start. ... Training From the Back of the Room (TBR) is a fresh approach to learning, training, presenting and facilitating that was developed by Sharon Bowman. It uses cognitive neuroscience and brain-based learning techniques to help learners to retain new information. TBR teaches you how to engage the five senses and keeps your learners active and engaged throughout the class. The concept is recognized internationally as one of the most effective frameworks for accelerated learning. It is a new way of teaching adults.


How COVID-19 accelerated a digital revolution in the insurance industry

The pandemic reminded us that we’re human. This experience has taught us compassion, grace, and the importance of both the health and wellbeing of ourselves and our families. COVID-19 has fundamentally reshaped the way we view protection products. In fact, two thirds (66%) of Americans say they now better understand life insurance’s value, with another quarter buying coverage for the first time. Awareness around the role of employers in providing access to these products has also increased. In a recent LIMRA study, one in four employees said they are more likely to sign up for certain benefits available through their employer. Along with this heightened awareness of our mortality and morbidity comes the realization that we thrive on human interaction. We can’t take a digital-only approach. Bringing emotion—positive emotion and empathy—to the experience and every interaction we have with customers will help us get farther, faster. As we continue to invest in technology across the insurance industry, we need to look for ways to make digital and human experiences work together for customers, employers, and financial professionals. Many of our customers tell us they don’t understand insurance products and they don’t know where to start educating themselves. 


7 Blindspots You Need to Uncover to Achieve Digital Banking Breakthrough

To explain how the "experience gap" might cause trouble, I'd like to share a real-life example. Several years ago, a fairly well-known and respectable Central European bank embarked on a voluminous digital transformation journey. The bank's application had a rating of 3.5 and was outdated. In order to digitalize and improve the bank's image and competitive chances in the growing digital market, management intended to urgently create and launch a modern-looking banking application. The initial design and development period was planned at 6 months. Nevertheless, the bank spent three times as long building the new application by itself: 1 year and 8 months. This was a serious project not only in terms of time but also the budget invested. Judging by the scope of the project, the improvements made and the timeline, the overall cost could be estimated at around half a million. However, the result did not live up to expectations at all. After the new application was released, its rating dropped to 2.4 from the previous 3.5 and kept falling even a year after the first release, as the application did not improve, but significantly worsened, the customer experience.


Riding out the wave of disruption

Disruption is not necessarily the crisis it’s frequently considered to be for incumbents, the researchers stress. Two technologies can often coexist in the marketplace for a significant period. Thus, it’s important for incumbent companies not to overreact. They should target dual users and reexamine the factors that have led to the old technology sticking around for so long. Of course, the profit implications of cannibalization of the old technology and leapfrogging depend on which type of firm is trumpeting the new technology. New entrants will always stand to gain when they introduce a technology that takes off. But incumbents rolling out a successive technology will also gain if their competitors would have introduced it anyway or if the 2.0 version has a higher profit margin than the original. The authors write, “Leapfroggers are an opportunity loss for incumbents, but switchers are a real loss.” Regardless of the predictive model they use, marketers should strive to understand how the various consumer segments identified in this study will grow or shrink over time and use that information in their forecasts of early sales or market penetration of successive technologies.


Understanding the AI alignment problem

What’s worse is that machine learning models can’t tell right from wrong and make moral decisions. Whatever problem exists in a machine learning model’s training data will be reflected in the model’s behavior, often in nuanced and inconspicuous ways. For instance, in 2018, Amazon shut down a machine learning tool used in making hiring decisions because its decisions were biased against women. Obviously, none of the AI’s creators wanted the model to select candidates based on their gender. In this case, the model, which was trained on the company’s historical hiring data, reflected problems within Amazon itself. This is just one of the several cases where a machine learning model has picked up biases that existed in its training data and amplified them in its own unique ways. It is also a warning against trusting machine learning models that are trained on data we blindly collect from our own past behavior. “Modeling the world as it is is one thing. But as soon as you begin using that model, you are changing the world, in ways large and small. There is a broad assumption underlying many machine-learning models that the model itself will not change the reality it’s modeling. In almost all cases, this is false,” Christian writes.


Fixing the cracks in public sector digital infrastructure

First, there needs to be a government-wide, comprehensive digital skills strategy. One survey of industry professionals found that 40% of public sector organisations did not have the right skills to carry out digital transformation. Every member of the workforce needs to be able to perform basic tasks online. But to press forward with digital transformation, the government needs to champion digital leadership in the public sector – and that includes paying properly for those skills. The Government Digital Service recently advertised for a head of technology and architecture with a maximum salary of £70,887 a year. According to Google Jobs, typical pay for this type of work ranges from £65,000 to £180,000 in the private sector. This puts the public sector at a unique disadvantage and pay scales should be reviewed. ... Second, the Cabinet Office needs to address the gap between guidance and action on the ground. Out-of-date technology is widespread in some areas of the public sector, despite there being a large volume of information from central government on maintaining and updating digital infrastructure. Legacy IT has been holding digital public services back for years and will continue to do so unless there is a cross-government push to drive this forward.


Emotion Detection in Tech: It’s Complicated

Emotion detection would be a lot easier if humans expressed themselves in homogenous ways. However, cultural backgrounds and unique life experiences influence personal expression. Michelle Niedziela, VP of research and innovation at market research firm HCD Research, said advertisers and their agencies can get overly excited about the "happy" responses an ad drives when the response may have been a natural reflex. "If I smile at you, you innately smile back. So, one thing is are they really feeling happy or just projecting happy?" said Niedziela. "But also, how big does a smile have to be in order to be interpreted as happy?" Even cheap camera sensors are improving, but some of them may not be able to detect subtle nuances in facial geometry or provide the same degree of reliability among individuals who represent different races. Also, things that change an individual's appearance like hats, bangs or facial hair can negatively impact the accuracy of emotion sensing. "In my mind, the two biggest challenges are hardware quality and the models," said Capgemini's Simion. "You need to be very careful when you're talking about emotionality is the dataset you're going to use because if you're just going to call normal APIs from the cloud providers, that's not going to help much."



Quote for the day:

"To do great things is difficult; but to command great things is more difficult." -- Friedrich Nietzsche

Daily Tech Digest - January 18, 2021

Go back to the office? Some employees would rather quit

It's been difficult to get a good read on the prevailing attitude towards working from home since traditional workplaces shut in the early months of 2020. While the consensus largely appears to be that employees relish the flexibility and (subjective) comfort that working from home provides, this is at odds with the mental health spectre that has loomed over the COVID-19 crisis, with countless reports and statistics highlighting the toll that working in isolation takes on wellbeing. This is captured in LiveCareer's survey. Despite 81% of employees saying they enjoyed working remotely, and 61% expressing a desire to continue working in a remote capacity after the pandemic is over, only 45% of those polled said that telecommuting had not taken a toll on their mental wellbeing. Clearly, the ideal situation is about balance: employees want the option to work remotely, while also having a shared workspace that they can use when needed. One major factor employers need to address to make the return to office life more appealing is safety. With many companies still trying to figure out how they can reconfigure or otherwise rethink their real estate investments to suit a new hybrid workforce, ensuring workplaces are safe should top the agenda.


AVIF Image Format: The Next-Gen Compression Codec

AVIF, or AV1 Image File Format, is an open-source and royalty-free image format based on the AV1 codec, and, like AV1, it provides a very high compression rate. The fact that it's royalty-free makes it stand out from the competition. Leveraging the power of AV1 has proven beneficial for AVIF, in both processing time and its ability to handle hardware issues. Before we further discuss the advantages of AVIF, be advised that the format is relatively new and still not widely adopted, and it relies on a reasonably new algorithm, so it may not be the best fit for all use cases right now. ... AV1 was designed to transmit video over the internet, and with its better compression rate it reduced the number of overall bits. The AV1 codec also provides multiple coding techniques that give developers some freedom when writing their code. If you wonder why we brought this concept of video compression into an image compression post, it's because video and image codecs share similarities in the nature of their data. The AV1 codec has proved very advantageous for the internet by saving bandwidth where MPEG-era codecs could not; JPEG XR was still in the race, but not as effective as AV1.
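Because adoption is still patchy, applications often have to detect AVIF files before deciding how to handle them. AVIF wraps AV1-coded image data in the ISO Base Media File Format, so a file can be recognized by the major brand in its leading "ftyp" box. A minimal sniffer (checking only the major brand, which is a simplification; compatible-brand lists can also declare AVIF support):

```python
# Minimal sketch: identify an AVIF file by its ISO-BMFF "ftyp" box.
# Layout: bytes 0-3 box size, 4-7 the literal b"ftyp", 8-11 the major brand.

def is_avif(data):
    """Return True if the major brand is 'avif' (still image) or 'avis' (sequence)."""
    if len(data) < 12 or data[4:8] != b"ftyp":
        return False
    return data[8:12] in (b"avif", b"avis")

header = b"\x00\x00\x00\x1cftypavif"  # typical start of an AVIF file
print(is_avif(header))                # True
print(is_avif(b"\xff\xd8\xff"))       # False (JPEG starts with an SOI marker)
```

This kind of signature check lets a server or build pipeline fall back to WebP or JPEG for clients and tools that cannot decode AVIF yet.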


Love in the time of algorithms: would you let your artificial intelligence choose your partner?

Another problematic consequence may be rising numbers of socially reclusive people who substitute technology for real human interaction. In Japan, this phenomenon (called “hikikomori”) is quite prevalent. At the same time, Japan has also experienced a severe decline in birth rates for decades. The National Institute of Population and Social Security Research predicts the population will fall from 127 million to about 88 million by 2065. Concerned by the declining birth rate, the Japanese government last month announced it would pour two billion yen (about A$25,000,000) into an AI-based matchmaking system. The debate on digital and robotic “love” is highly polarised, much like most major debates in the history of technology. Usually, consensus is reached somewhere in the middle. But in this debate, it seems the technology is advancing faster than we are approaching a consensus. Generally, the most constructive relationship a person can have with technology is one in which the person is in control, and the technology helps enhance their experiences. For technology to be in control is dehumanising.


How Teams Can Overcome the Security Challenges of Agile Web App Development

Managing company secrets in an agile environment means CISOs need to rethink the scalability of their current security solutions. With rapidly changing codebases, it’s essential that enterprises use security tools that support agile development and also extend to other platforms that devops teams might use. Akeyless is a versatile security tool that fragments encryption keys and provides a high degree of data security. It supports agile release environments and can be scaled to different platforms as needed. One of the reasons I like implementing this solution when consulting for app companies is how easily I can integrate it with all the major development platforms through plugins, ensuring that in-house departments and subcontractors alike can securely manage access to sandbox servers and databases, without interrupting their workflows. Beyond governance concerns, in my experience, compliance and audit teams generally stand to gain a great deal by learning how automation can help them achieve their goals and how their protocols can improve as a result. On the other hand, complete automation might not be possible in every area.
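The underlying pattern, whatever vault product is used, is that secrets are resolved at runtime rather than committed to the codebase, so rapidly changing branches never carry credentials. A generic sketch (this is not the Akeyless API; the variable names are invented for illustration):

```python
import os

# Generic secrets-out-of-code pattern: the secret store (or CI system)
# provisions environment variables; application code only reads them.

def get_secret(name, default=None):
    """Fetch a secret provisioned for this environment, failing loudly if absent."""
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError(f"secret {name!r} not provisioned for this environment")
    return value

# Simulate what the secret store would inject before the app starts:
os.environ["SANDBOX_DB_PASSWORD"] = "example-only"
print(get_secret("SANDBOX_DB_PASSWORD"))  # example-only
```

Failing loudly when a secret is missing is deliberate: a sandbox build that silently falls back to an empty credential is exactly the kind of gap that widens as teams and subcontractors multiply.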


Multiple backdoors and vulnerabilities discovered in FiberHome routers

FTTH ONT stands for Fiber-to-the-Home Optical Network Terminal. These are special devices fitted at the end of optical fiber cables. Their role is to convert optical signals sent via fiber-optic cables into classic Ethernet or wireless (WiFi) connections. FTTH ONT routers are usually installed in apartment buildings or inside the homes or businesses that opt for gigabit-type subscriptions. In a report published last week, security researcher Pierre Kim said he identified a large collection of security issues with FiberHome HG6245D and FiberHome RP2602, two FTTH ONT router models developed by Chinese company FiberHome Networks. The report describes both positive and negative findings about the two router models and their firmware. ... Furthermore, the Telnet management feature, which is often abused by botnets, is also disabled by default. However, Kim says that FiberHome engineers have apparently failed to activate these same protections for the routers' IPv6 interface. Kim notes that the device firewall is only active on the IPv4 interface and not on IPv6, allowing threat actors direct access to all of the router's internal services, as long as they know the IPv6 address to access the device.
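The flaw Kim describes is a parity failure between address families: any service filtered on IPv4 but reachable over IPv6 is an unintended exposure. A small sketch of that parity check (the port numbers and rule sets here are illustrative, not taken from the FiberHome firmware):

```python
# Flag listening services that the IPv4 firewall blocks but the IPv6
# firewall leaves open -- the gap described in Kim's report.

def ipv6_only_exposure(blocked_v4, blocked_v6, listening_ports):
    """Ports intended to be filtered (blocked on v4) yet still open over v6."""
    return {p for p in listening_ports if p in blocked_v4 and p not in blocked_v6}

listening  = {22, 23, 80, 443}
blocked_v4 = {22, 23}   # SSH/Telnet filtered on the IPv4 interface...
blocked_v6 = set()      # ...but no IPv6 rules configured at all

print(sorted(ipv6_only_exposure(blocked_v4, blocked_v6, listening)))  # [22, 23]
```

Running the same audit logic against both interfaces is a cheap way for vendors or auditors to catch the "we hardened IPv4 and forgot IPv6" class of bug before shipment.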


How do I select a fraud detection solution for my business?

From strictly rules-based to fully black box: the former gives you complete control but can be cumbersome and relies on a knowledgeable in-house fraud team, while the latter is perfect for extreme transaction volumes but offers little explainability. Fortunately, there is a middle ground with whitebox, supervised machine learning: you get the best of both worlds, granular rule-based control with machine learning making connections between disparate and complex data points. Fraud detection technologies should fit your business, not the other way around. Fraudsters evolve and find ingenious workarounds to most point solutions. Modern fraud detection is a “net” approach where the latest cutting-edge tools are used in combination to make it very, very hard for a fraudster to fool them all. Results are very hard to predict, so select fraud technologies that allow you to test and show proof of value with no commitment and free trial periods. Modern, effective fraud tech should follow the best SaaS products, where you see actual pricing, monthly contracts and free trials. Product value and risk should rest solely with the fraud detection partner.
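The "whitebox" middle ground can be sketched as a decision layer where explicit, auditable rules fire first and a model score handles the grey area. In this toy example the thresholds, feature names, and score are all invented for illustration; a real deployment would use a trained model and a tuned rule set:

```python
# Toy hybrid fraud decision: hard rules give explainable outcomes,
# a (stand-in) model score catches patterns no hand-written rule anticipated.

def fraud_decision(amount, country_mismatch, model_score):
    # Rule layer: deterministic, easy to explain to an auditor.
    if amount > 10_000 and country_mismatch:
        return "block"
    # Model layer: flags disparate-signal cases for human review.
    if model_score > 0.8:
        return "review"
    return "approve"

print(fraud_decision(15_000, True, 0.2))   # block   (rule fired)
print(fraud_decision(120, False, 0.9))     # review  (model flagged)
print(fraud_decision(120, False, 0.1))     # approve
```

Because every "block" traces back to a named rule, the fraud team keeps the explainability of the rules-based end while still benefiting from the model's reach.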


The AI Incident Database wants to improve the safety of machine learning

“The goal of the AIID is to prevent intelligent systems from causing harm, or at least reduce their likelihood and severity,” McGregor says. McGregor points out that the behavior of traditional software is usually well understood, but modern machine learning systems cannot be completely described or exhaustively tested. Machine learning derives its behavior from its training data, and therefore, its behavior has the capacity to change in unintended ways as the underlying data changes over time. “These factors, combined with deep learning systems capability to enter into the unstructured world we inhabit means malfunctions are more likely, more complicated, and more dangerous,” McGregor says. Today, we have deep learning systems that can recognize objects and people in images, process audio data, and extract information from millions of text documents, in ways that were impossible with traditional, rule-based software, which expects data to be neatly structured in tabular format. This has enabled applying AI to the physical world, in settings such as self-driving cars, security cameras, hospitals, and voice-enabled assistants. And all these new areas create new vectors for failure.
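McGregor's point that learned behavior shifts with the data can be made concrete with a deliberately tiny model. In this sketch (a 1-D nearest-centroid classifier, invented purely for illustration), the identical input is classified differently after the training examples drift, with no change to the code itself:

```python
# Minimal illustration: a learned system's behavior is a function of its
# training data, so the same input can flip class as the data shifts --
# unlike a fixed rule-based program, whose behavior only changes with its code.

def centroid(xs):
    return sum(xs) / len(xs)

def classify(x, pos_examples, neg_examples):
    """1-D nearest-centroid classifier: the label follows the closer class mean."""
    closer_to_pos = abs(x - centroid(pos_examples)) < abs(x - centroid(neg_examples))
    return "pos" if closer_to_pos else "neg"

query = 3.0
print(classify(query, pos_examples=[8, 9], neg_examples=[0, 1]))  # neg
# After the underlying data drifts, the identical input flips class:
print(classify(query, pos_examples=[4, 6], neg_examples=[0, 1]))  # pos
```

Nothing in the program changed between the two calls; only the data did, which is why exhaustive testing of a deployed ML system at one point in time cannot guarantee its future behavior.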


Chatbot Gone Awry Starts Conversations About AI Ethics in South Korea

Luda came under the national spotlight when it was reported that users were training Luda to spew hate speech against women, sexual minorities, foreigners, and people with disabilities. Screengrabs show Luda saying, “they give me the creeps, and it’s repulsive” or “they look disgusting,” when asked about “lesbians” and “black people,” respectively. Further, it was discovered that groups of users in certain online communities were training Luda to respond to sexual commands, which provoked intense discussions about sexual harassment in a society that already grapples with gender issues. Accusations of personal data mishandling by ScatterLab emerged as Luda continued to draw nationwide attention. Users of Science of Love complained that they were not aware that their private conversations would be used in this manner, and it was also shown that Luda was responding with random names, addresses, and bank account numbers from the dataset. ScatterLab had even uploaded a training model of Luda on GitHub, which included data that exposed personal information. Users of Science of Love are preparing for a class-action lawsuit against ScatterLab, and the Personal Information Protection Commission, a government watchdog, opened an investigation into ScatterLab to determine whether it violated the Personal Information Protection Act.
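The leakage of names, addresses, and account numbers points to a missing preprocessing step: scrubbing obvious identifiers from chat logs before they become training data. A hedged sketch of such a filter (the regex patterns below are illustrative only and nowhere near a complete PII detector, which would need named-entity recognition and locale-specific rules):

```python
import re

# Illustrative redaction pass for chat logs prior to model training.
# Patterns are toy examples, not a production PII filter.

ACCOUNT_RE = re.compile(r"\b\d{10,14}\b")            # bank-account-like digit runs
PHONE_RE   = re.compile(r"\b\d{2,3}-\d{3,4}-\d{4}\b")  # e.g. Korean mobile format

def redact(text):
    text = ACCOUNT_RE.sub("[ACCOUNT]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

print(redact("Send it to 110123456789, call 010-1234-5678"))
# Send it to [ACCOUNT], call [PHONE]
```

Even a crude pass like this would have prevented the most visible failure mode reported here: the bot echoing verbatim account numbers from users' private conversations.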


Digital transformation: it has never been more relevant for businesses

As it’s not necessarily a tangible metric, it can be hard to measure the return on investment (ROI) of the transformation journey and its success. However, numerous considerations can help determine what the ROI is. Firstly, setting out objectives for transformation – it could be to improve the customer experience, the company’s infrastructure or staff productivity, for example. Secondly, outlining the costs of implementing the transformation strategy is essential – as is knowing what the outcomes of that financial outlay are. This will provide a reference point and clear performance indicators when measuring ROI. Of course, setting realistic goals is important in the first place; stage one of the journey, discovering and assessing, should provide guidance on setting achievable targets. And when implementing new systems, there are different metrics that can be detailed in order to measure their success. For instance, if trying to improve the end-user experience, tackling common pain points experienced by external parties, such as slow load times and application response times, will help reach the overall goal. If IT systems offer a rapid response, end users won’t feel frustrated by the operating system.
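Once each objective has a quantified outcome and the outlay is known, the ROI framing above reduces to simple arithmetic. A back-of-envelope sketch, with entirely hypothetical figures:

```python
# Classic ROI: net gain over cost. The gains dict maps each transformation
# objective to its estimated annual benefit (all figures are hypothetical).

def roi(total_gain, total_cost):
    """Return on investment as a fraction of cost."""
    return (total_gain - total_cost) / total_cost

gains = {
    "support-ticket reduction": 40_000,
    "staff productivity":       85_000,
}
cost = 100_000  # implementation outlay

print(f"ROI: {roi(sum(gains.values()), cost):.0%}")  # ROI: 25%
```

Keeping one line item per objective mirrors the advice above: each stated goal gets its own measurable outcome, so underperforming initiatives are visible rather than averaged away.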


Do you really want a CEO to be a role model?

It’s likely that the effectiveness of role models is rooted in mirror neurons, specialized cells that are located in several areas of the human brain. They were first identified about 30 years ago when neuroscientists who had implanted electrodes in monkeys to study how their brains generated hand movements suddenly realized that the same neurons were firing when the monkeys ate and when the monkeys watched the scientists eat. Since then, some researchers have come to see mirror neurons as the biological mechanism through which humans unconsciously copy the behaviors of others. That conclusion would lend scientific credence to the advice that Sutton got from his dad: Being a jerk can be contagious. The work of sociologist Robert K. Merton offers a clue to avoiding an infection of negative behaviors. Merton made a distinction between role models (a term he coined in the 1950s) and reference individuals. He said that when a person emulates a reference individual, he or she copies that person’s good and bad behavioral traits and values without discrimination. But when a person emulates a role model, the focus is on a more limited segment of behaviors and values. This suggests that you can act like Elon Musk, the entrepreneurial innovator, without becoming Musk, the blurting tweeter.



Quote for the day:

"If one oversteps the bounds of moderation, the greatest pleasures cease to please." -- Epictetus