Daily Tech Digest - May 07, 2021

Four Things to Do if Your Bank is Eyeing Digital Assets

The evolution of money toward digital assets is affecting bank and fintech organizations globally. Companies should proactively think through the adjustments that will enable them to keep up with this rapid pace of change. At the start of this century, when mobile banking apps first began appearing and banks started offering remote deposit capture for checks, organizations that were slow to adopt these technologies wound up being left behind. The OCC guidance explicitly authorizing the use of digital assets should dispel any doubt about whether such currencies will be a major disruption. ... A crucial determinant of how successful a bank will be in deploying digital asset-related services is how well-equipped and properly aligned its technology platforms, vendors, policies and procedures are. One of the primary concerns for traditional banks will be assessing their existing core banking platform; many leading vendors do not have blockchain and digital asset capabilities available at this time. This kind of readiness is key if bank management hopes to avoid carrying significant technology debt into the next decade.


How do Decision Trees and Random Forests Work?

There are two types of decision trees: classification and regression. A classification tree predicts the category of a categorical dependent variable — yes/no, apple/orange, died/survived, etc. A regression tree predicts the value of a numeric variable, similar to linear regression. The thing to watch out for with regression trees is that they cannot extrapolate outside the range of the training dataset the way linear regression can. However, regression trees can use categorical input variables directly, unlike linear regression. While the Titanic decision tree shows binary splits (each non-leaf node produces two child nodes), this is not a general requirement: depending on the tree, nodes may have three or even more child nodes. I’m going to focus on classification decision trees for the rest of this article, but the basic idea is the same for regression trees as for classification trees. Finally, I’ll mention that this discussion assumes the use of the rpart() function in R. I’ve heard that Python’s implementations can’t handle categorical variables directly, but I’m much less familiar with Python, especially for data analysis. I believe the basic theory is the same, but the implementation is different.
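
The split criterion behind a classification tree can be sketched in a few lines. Below is a minimal illustration, in Python with a made-up "Titanic-like" toy dataset, of choosing the best binary split on one numeric feature by minimizing the weighted Gini impurity of the two children. This is the kind of criterion rpart() applies recursively; it is an illustrative sketch, not a reproduction of rpart's actual implementation.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(values, labels):
    """Find the threshold on one numeric feature that minimizes
    the weighted Gini impurity of the two child nodes."""
    pairs = sorted(zip(values, labels))
    best = (float("inf"), None)
    for i in range(1, len(pairs)):
        left = [lab for _, lab in pairs[:i]]
        right = [lab for _, lab in pairs[i:]]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(pairs)
        threshold = (pairs[i - 1][0] + pairs[i][0]) / 2
        if score < best[0]:
            best = (score, threshold)
    return best

# Toy data: passenger age vs. survived
score, threshold = best_split([8, 10, 35, 40, 60], ["yes", "yes", "no", "no", "no"])
```

With this toy data, the best threshold falls between the two age groups, giving two perfectly pure children (weighted impurity 0); a real tree would repeat this search over every feature at every node.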


Why financial-services firms need to change with the times

Rapidly evolving technology, regulatory constraints, and relentless pressure to hit short-term financial targets may be hindering firms from making needed investments to upskill their employees. These employees also face critical skills gaps in areas such as empathy, resilience, adaptability, and creative problem-solving. Turnover is a factor as well — firms may resist investing in bespoke training initiatives that increase the market value of their people, who then leave and take their enhanced skills profile with them. Such programs are expensive and have an uncertain ROI. ... The challenge of upskilling so many people is so significant that firms may not be able to solve it by working independently — though many have started that journey. For example, in 2017, Citigroup announced a partnership with Cornell Tech to develop digital talent in the New York City labor market. But a market-based, go-it-alone approach may be too slow, or risk leaving small firms behind. It behooves industry-wide associations and trade groups to create the right foundation to help all firms in a country close the skills gap, leading to faster progress at a sector level.


The Rise and Rise of Digital Banking: How Fintech is Set to Disrupt Brick and Mortar Banking

Industry insiders have long been concerned about the role fintechs have been playing in the world of banking and whether or not they will ultimately replace traditional financial institutions. This fear was exacerbated by the recent introduction of the People’s Bank of China Fintech Development Plan, which looked to accelerate the accommodation of digital financial services in the country. But could fintechs actually spell the end of traditional banking? To address this properly, let’s first consider what finance actually is. The purpose of finance is to realise the optimal distribution of capital across time and space amid uncertainty, to serve the real economy and to maximise social utility. One big barrier to this is adverse selection, arising from a lack of information, along with the emergence of ethical issues. Finance should exist to identify and price risks. Any technology developed for the sector should aim to better understand customers and their willingness, and ability, to pay – while pricing them accurately. With this in mind, traditional banks have an advantage in terms of capital costs, while fintechs are competitive in terms of operating costs.


Quantum computing could be useful faster than anyone expected

For most scientists, a quantum computer that can solve large-scale business problems is still a prospect that belongs to the distant future, and one that won't be realized for at least another decade. But now researchers from US banking giant Goldman Sachs and quantum computing company QC Ware have designed new quantum algorithms that they say could significantly boost the efficiency of some critical financial operations – on hardware that might be available in only five years' time. Rather than waiting for a fully fledged quantum computer, bankers could start running the new algorithms on near-term quantum hardware and reap the benefits of the technology even while quantum devices remain immature. Goldman Sachs has, for many years, been digging into the potential that quantum technologies have to disrupt the financial sector. In particular, the bank's researchers have explored ways to use quantum computing to optimize what are known as Monte Carlo simulations, which price financial assets based on how the prices of other related assets change over time, thereby accounting for the risk inherent in different options, stocks, currencies and commodities.
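
For readers unfamiliar with the classical version of the technique, a plain (non-quantum) Monte Carlo pricer is only a few lines. The sketch below estimates a European call option price under geometric Brownian motion; the contract parameters are hypothetical and the model is deliberately simplistic, chosen only to show where the simulation cost comes from: accuracy improves with the square root of the number of paths, which is exactly the scaling quantum algorithms aim to beat.

```python
import math
import random

def mc_call_price(s0, strike, rate, vol, t, n_paths, seed=42):
    """Monte Carlo estimate of a European call price under
    geometric Brownian motion (risk-neutral measure)."""
    rng = random.Random(seed)
    drift = (rate - 0.5 * vol ** 2) * t
    diffusion = vol * math.sqrt(t)
    total = 0.0
    for _ in range(n_paths):
        # Simulate the terminal asset price on one path
        s_t = s0 * math.exp(drift + diffusion * rng.gauss(0, 1))
        total += max(s_t - strike, 0.0)            # call payoff at maturity
    return math.exp(-rate * t) * total / n_paths   # discounted mean payoff

# Hypothetical contract: spot 100, strike 105, 1% rate, 20% vol, 1 year
price = mc_call_price(100.0, 105.0, 0.01, 0.20, 1.0, n_paths=50_000)
```

With 50,000 paths the estimate lands close to the analytic Black-Scholes value for these parameters; halving the error classically requires roughly four times as many paths.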


Cloud Native and Kubernetes Observability: Expert Panel

The concept of observability is really agnostic to where you’re running your workload, but the added complexity of multi-tenancy, cloud-native workloads, and containerization leads to a rising need for observability. Single-tenant monoliths can be easier to make observable because all the functionality is right there, but as you add more services and users there’s a chance that a bug will only manifest for one particular combination of services, versions of those services, and user traffic patterns. The most important thing is to be aware of when you’re about to outgrow your previous solutions, and to be proactive about adding the right instrumentation and analysis frameworks to achieve observability before it’s too late. When you can no longer understand the blast radius each change will have, and when you can no longer answer the questions you have about your system because the underlying data has been aggregated away… that’s the point at which it’s too late. So be proactive and invest early in observability, to both improve developer productivity and decrease downtime.


How To Take Full Advantage Of GPUs In Large Language Models

Typically, training models use weak scaling approaches and distributed data parallelism to scale the training batch size with the number of GPUs. Though this approach allows the model to train on larger datasets, it comes with a trade-off: all parameters must fit on a single GPU. This is where model parallelism comes into the picture. Model-parallel training overcomes this limitation by partitioning the model across multiple GPUs. Previously, general-purpose model-parallel frameworks such as GPipe and Mesh-TensorFlow were proposed for the same purpose. While GPipe divides groups of layers across different processors, Mesh-TensorFlow employs intra-layer model parallelism. Other methods of model parallelism, such as tensor and pipeline parallelism, have been proposed too. Unfortunately, wrote the researchers at NVIDIA, naive usage leads to fundamental scaling issues at thousands of GPUs; expensive cross-node communication and idle periods spent waiting on other devices are a few of the reasons. Moreover, without model parallelism, the high number of compute operations required can result in unrealistically long training times.
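
The idle periods mentioned above are often called the pipeline "bubble". For a GPipe-style schedule with p pipeline stages and m micro-batches, the commonly cited estimate of the idle fraction is (p - 1) / (m + p - 1): the pipeline spends p - 1 slots filling and draining out of m + p - 1 total slots. A few lines make the trade-off concrete; the stage and micro-batch counts below are illustrative, not taken from any particular system.

```python
def pipeline_bubble_fraction(num_stages, num_microbatches):
    """Fraction of device time lost to pipeline fill/drain in a
    GPipe-style schedule: (p - 1) / (m + p - 1)."""
    p, m = num_stages, num_microbatches
    return (p - 1) / (m + p - 1)

# With 8 pipeline stages, raising the micro-batch count shrinks the bubble:
few = pipeline_bubble_fraction(8, 8)    # nearly half the schedule is idle
many = pipeline_bubble_fraction(8, 64)  # idle time drops below 10%
```

This is why pipeline schedules want many micro-batches in flight, and why naive partitioning with few micro-batches leaves devices waiting on each other.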


Optimal Feature Discovery: Better, Leaner Machine Learning Models Through Information Theory

From the perspective of information theory, both the prediction target and the features in a model are random variables, and it’s possible to quantify in bits the amount of information provided about the target by one or more features. One important concept is relevance, a measure of how much information we expect to gain about the target by observing the value of a feature. Another important concept is redundancy, a measure of how much information is shared between one feature and another. Going back to the coin flip example, there could be different ways to obtain information about the bias of the coin. We could have access to a feature that tells us the rate of heads based on the design of the coin, or we could build a profile feature that tracks the number of heads and tails historically. Both features are equally relevant in that they provide equal amounts of information, but observing both features doesn’t give us more information than observing either one; hence they are mutually redundant.
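
Both quantities can be computed directly from the definition of mutual information, I(X; Y) = H(X) + H(Y) - H(X, Y). The sketch below, using a made-up dataset with two perfectly redundant features, shows that observing both features yields no more information about the target than observing either one alone.

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy H(X) in bits of a list of (hashable) values."""
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """I(X; Y) = H(X) + H(Y) - H(X, Y), in bits."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Toy data: two features that each fully determine the target,
# and therefore fully duplicate each other
target = [0, 0, 1, 1]
feat_a = ["fair", "fair", "biased", "biased"]   # coin design
feat_b = ["low", "low", "high", "high"]         # historical heads rate

mi_a = mutual_information(target, feat_a)       # 1.0 bit: fully relevant
mi_b = mutual_information(target, feat_b)       # 1.0 bit: equally relevant
mi_both = mutual_information(target, list(zip(feat_a, feat_b)))  # still 1.0 bit
```

The joint feature gives exactly the same 1.0 bit as either feature alone, which is the formal sense in which the two features are mutually redundant.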


There’s a revolution coming in voice profiling and the warning signs are loud and clear

When conducting research for my forthcoming book, The Voice Catchers: How Marketers Listen In to Exploit Your Feelings, Your Privacy, and Your Wallet, I went through over 1,000 trade magazine and news articles on the companies connected to various forms of voice profiling. I examined hundreds of pages of US and EU laws applying to biometric surveillance. I analysed dozens of patents. And because so much about this industry is evolving, I spoke to 43 people who are working to shape it. It soon became clear to me that we are in the early stages of a voice-profiling revolution that companies see as integral to the future of marketing. Thanks to the public’s embrace of smart speakers, intelligent car displays and voice-responsive phones – along with the rise of voice intelligence in call centres – marketers say they are on the verge of being able to use AI-assisted vocal analysis technology to achieve unprecedented insights into shoppers’ identities and inclinations. In doing so, they believe they will be able to circumvent the errors and fraud associated with traditional targeted advertising.


Linux Foundation launches open source agriculture infrastructure project

The Linux Foundation has lifted the lid on a new open source digital infrastructure project aimed at the agriculture industry. The AgStack Foundation, as the new project will be known, is designed to foster collaboration among all key stakeholders in the global agriculture space, spanning private business, governments, and academia. As with just about every other industry in recent years, there has been a growing digital transformation across the agriculture sector that has ushered in new connected devices for farmers and myriad AI and automated tools to optimize crop growth and circumvent critical obstacles, such as labor shortages. Open source technologies bring the added benefit of data and tools that any party can reuse for free, lowering the barrier to entry and helping keep companies from getting locked into proprietary software operated by a handful of big players. ... The AgStack Foundation will be focused on supporting the creation and maintenance of free and sector-specific digital infrastructure for both applications and the associated data.



Quote for the day:

"Leadership appears to be the art of getting others to want to do something you are convinced should be done." -- Vance Packard

Daily Tech Digest - May 06, 2021

What’s the hype in hyperautomation?

Gartner predicted that hyperautomation would be one of the top strategic technology trends from 2020 onwards, but that doesn’t necessarily mean that manufacturers must buy into the hype. The first thing to do to understand whether end-to-end automation can deliver substantial business value is to create a roadmap that clearly aligns business goals with the automation tools needed to reach them. Gartner suggests considering three key objectives — revenue, costs and risks. Against these parameters, manufacturers might want to think about which technologies can drive revenue by enhancing customer engagement, increasing output, and automating repetitive tasks. They should then redesign processes to reduce the cost of poor quality and streamline production. Finally, they might need to consider the compliance risks of inefficient processes — for example, feeding parts to a machine manually might be not only inefficient but also risky, and might therefore violate safety regulations. A careful consideration of these factors should give manufacturers a clearer idea of whether or not hyperautomation can significantly boost productivity and give them the competitive edge they might be lacking without it.


How to Secure the Connected & Automated Mobility (CAM) Ecosystem

Under a new regulation set by the United Nations, car manufacturers are required to secure vehicles against cyberattacks. With the upcoming transposition of the United Nations' regulations into EU policy, the new regulation on cybersecurity will be mandatory in the European Union for all new vehicle types from July 2022, and for all new vehicles produced, regardless of type, from July 2024. It is important to remember that the UNECE Regulations and related ISO standards do not apply to all CAM stakeholders. The regulation applies to passenger cars, vans, trucks and buses, as well as light four-wheeled vehicles equipped with automated driving functionality from level 3 onwards. The report is intended to support the work of the European Commission and the EU Member States’ competent authorities in the transposition of the UN cybersecurity regulation into EU policy. Decision-makers responsible for protecting the security and resilience of the CAM ecosystem at EU level will find in the report the relevant cybersecurity measures and key challenges they need to consider when drafting their cybersecurity baseline.


Security probe of Qualcomm MSM data services

Mobile Station Modem (MSM) is an ongoing series of 2G/3G/4G/5G-capable systems on a chip (SoCs) that Qualcomm has been designing since the early 1990s. MSM has always been, and will remain, a popular target for security research, because hackers want a way to attack a mobile device remotely just by sending it an SMS or a crafted radio packet. But 3GPP protocols are not the only entry point into the modem: Android also has the ability to communicate with the modem processor through the Qualcomm MSM Interface (QMI). MSM is managed by the Qualcomm real-time OS (QuRT), which cannot be debugged or dumped even on rooted Android devices; QuRT’s integrity is ensured by TrustZone. There is only one practical way to dynamically probe the modem, namely to exploit a vulnerability. There have been several successful attempts to patch QuRT by exploiting vulnerabilities in the Qualcomm Trusted Execution Environment (QTEE) or the Linux kernel. The latest compromised SoC is the MSM8998 (Pixel 2). In our research, we fuzzed MSM data services so we could find a way to patch QuRT on modern SoCs directly from Android.
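
As an aside for readers new to the technique, the core loop of mutation fuzzing is simple. The sketch below is a generic illustration only: the toy length-prefixed parser stands in for a real target such as a modem data-service handler, and nothing here is Qualcomm-specific or drawn from the actual harness used in this research.

```python
import random

def toy_parser(data: bytes) -> bool:
    """Stand-in target: parses a tiny length-prefixed message and
    'crashes' (raises) when the length field exceeds the payload."""
    if len(data) < 2 or data[0] != 0x7E:
        return False                       # not our protocol, ignore
    length = data[1]
    payload = data[2:2 + length]
    assert len(payload) == length, "buffer over-read"  # the planted bug
    return True

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip one random byte of a known-good input."""
    data = bytearray(seed)
    data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(seed: bytes, iterations=1000, rng_seed=1):
    rng = random.Random(rng_seed)
    crashes = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            toy_parser(case)
        except AssertionError:
            crashes.append(case)           # record the crashing input
    return crashes

crashes = fuzz(b"\x7e\x03abc")
```

Even this naive loop reliably finds inputs whose length byte overruns the payload; real modem fuzzers add coverage feedback, protocol awareness and crash triage on top of the same idea.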


5 Ways to Be a Leader Your Employees Will Respect

Transparent communication is the ability to have open, sincere and direct dialogue. But there's a fine line between transparency and rudeness, so keep empathy at the top of your mind. When done right, transparent communication is one of the foundational practices of inclusive leadership, and your new hire will appreciate you for it. It's important to remember that your words matter, especially when you're in a position of power. So assess how your employees react to your messaging and continually adjust language and tone as needed. Everybody has different communication preferences, so keep a close eye on what you say and how you say it. Flexibility is the model of the future. We all learned the importance of flexible workplace systems this past year, and that will continue in our post-pandemic world. Flexible schedules in hybrid or remote-work situations allow employees who are also caregivers the space in their day to ensure family comes first. Companies like Google and Facebook have already announced hybrid-style workplaces starting in the fall of 2021, while companies like Slack and Twitter say employees never need to return to the office.


Emerging open cloud security framework has backing of Microsoft, Google and IBM

While they’ve pulled in some of the big cloud vendors, they’ve also got large companies that consume cloud services, like FedEx, Pfizer and Goldman Sachs. Conspicuously missing from the group is AWS, by far the biggest player in the cloud infrastructure market. But Lippis says that he hopes, as the project matures, other companies, including AWS, will join. “There’s lots of security programs and industry programs that get out there and that people are asking them to join, and so some companies want to wait to see how well this pans out [before making a commitment to it],” Lippis said. His hope is that, over time, Amazon will come around and join the group, but in the meantime they are working to get to the point where everyone in the community will feel good about what they’re doing. The idea is to start with security alerts and find a way to build a common format, giving companies the same kind of system for tracking security alerts in the cloud that they already have in the data center. The way they hope to do that is with this open dialogue between the cloud vendors and the companies involved with the group.


How to apply a Zero Trust approach to your IoT solutions

Securing IoT devices presents a couple of additional layers of complexity because of the incredible diversity in design, hardware, operating systems, deployment locations, and more. For example, many are “user-less” and run automated workloads, presenting challenges when integrating into existing identity and access management tools. Many IoT devices have also been deployed using infrastructure and equipment not originally designed for a connected world, or have limited capabilities and connectivity, making them challenging to secure. And because IoT devices are typically deployed in diverse environments—ranging from inside factories or office buildings to remote worksites or critical infrastructure—they’re exposed in unique ways and can offer high-value targets to attackers. ... Securing IoT solutions with a Zero Trust security model starts with non-IoT-specific requirements—specifically, ensuring you have implemented the basics of securing identities and their devices, and of limiting their access. These include explicitly verifying users, having visibility into the devices they’re bringing onto the network, and being able to make dynamic access decisions using real-time risk detections.


IQ tests: are humans getting smarter?

From the algorithms that make our social media accounts function to the sleep-tracking technology in our smartwatches, the world has never seemed so technologically advanced and developed. Which is why it would be easy to assume that with each generation, humans are getting smarter. But is this the case? It’s a question many scientists have pondered, particularly so given that throughout the 20th century the average score on IQ tests around the world increased significantly – especially in the west. This increase was around three IQ points per decade – meaning we are technically living with more geniuses on the planet than ever before. This increase in IQ scores and the seeming tendency for intelligence levels to increase over time is known as the Flynn effect (named after the late US-born educator, James Flynn). And improvements in health and nutrition, better education and working conditions, along with recent access to technology have all contributed. Indeed, in the 19th century, for example, industrialisation created large overcrowded cities with poor health outcomes and premature death.


10 digital transformation metrics to measure success in 2021

Metrics tied to business-case realization have morphed, or are morphing, into continuous value realization. On-time/on-budget delivery metrics are evolving into measuring flexibility in adjusting to scope. “Before COVID, the world was already contemplating a move from project-thinking to product-thinking,” says Kelker, whose firm is also tracking continuous value realization across 400 companies. “The pandemic has hastened this – we have multiple clients who are now interested in aligning, earmarking, allocating, and spending budgets in line with the product-aligned agile delivery.” ... Matching revenue to specific marketing efforts, for example, will be important. “This includes the gamut of how digital transformation investments can help reduce customer churn, enhance customer acquisition, and improve the brand experience,” says Nitish Mittal, vice president at Everest Group. “For instance, in retail, providing a frictionless direct-to-consumer commerce experience is a key imperative.” For some businesses, determining how technology innovation generally is impacting revenues will be helpful. “In all of my conversations with CEOs, CDOs, and CMOs, they are pivoting away from cost containment and back toward capturing market share and providing value,” says Belliappa.


Global Phishing Attacks Spawn Three New Malware Strains

The malware ecosystem used by UNC2529 consists of either a downloader (Doubledrag) or an Excel document with an embedded macro; a dropper (Doubledrop); and a backdoor (Doubleback). The infection starts with phishing emails that are rigged with a link to download a malicious payload that contains a JavaScript downloader with code that’s heavily obfuscated in order to evade analysis. Once it’s executed, Doubledrag tries to download a dropper – Doubledrop – in the second stage of the attack chain. Doubledrop is an obfuscated PowerShell script designed to plant a backdoor into memory. It has two flavors: a 32-bit and a 64-bit instance of the Doubleback backdoor. With all that set up, the backdoor gets to work inserting plugins and reporting back to its controllers. “The backdoor, once it has the execution control, loads its plugins and then enters a communication loop, fetching commands from its C2 server and dispatching them,” Mandiant describes. “One interesting fact about the whole ecosystem is that only the downloader exists in the file system. The rest of the components are serialized in the registry database, which makes their detection somewhat harder, especially by file-based antivirus engines.”


The ethical implications of chatbots

The most reasonable of the motives above is arguably point two. The research related to this patent could be hugely beneficial in ways we haven’t even conceived of yet. However, the unforeseen has no moral compass, and the fact is that it could also be used for nefarious purposes outside the original intent of its creators. You might think I have seen one too many Hollywood movies. However, if you think of the already burgeoning list of morally ambiguous ways in which technology is used, it is not so much science fiction as science fact. Just one example is bias in Artificial Intelligence (AI) algorithms. While AI can help doctors interpret test results more accurately in certain instances, these algorithms are trained on historical data that has our inherent human biases built in. So when studies such as this show that black patients would have received better life-saving medical care “if their kidney function had been estimated using the same formula as for white patients”, we should probably take notice and do something about it. In addition to simply creating your digital zombie doppelganger, does this technology potentially open a Pandora’s box for identity theft?



Quote for the day:

"It is easy to lead from the front when there are no obstacles before you, the true colors of a leader are exposed when placed under fire." -- Mark W. Boyer

Daily Tech Digest - May 05, 2021

Why Every Developer Should Start Programming With C

Almost all programming languages offer built-in or library methods to solve coding problems that need logical algorithms. For example, if you need to copy specific items into a second array, you can use the built-in filter method in JavaScript. If you’re using Java, you may use the filter method from the java.util.stream package. Virtually any popular programming language has a built-in or library method to filter an array. But if you’re using C, you have to implement it yourself — because there’s no built-in or library filter method in C. When you run into those scenarios, you’ll face problems that need to be solved, and practicing algorithmic questions is a great way to become a good problem solver. We don’t always work with straightforward tasks that involve only the standard library and built-in features of your favorite enterprise programming language; we often work on tasks that demand problem-solving skills. So writing your initial code in C makes you a better problem solver. Moreover, developers who participate in competitive programming hackathons often use C to solve problems.
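
The exercise the author describes, rebuilding filter from a bare loop as C forces you to, looks like this. It is sketched here in Python for brevity; a C version would follow the same loop-and-predicate pattern but would additionally have to allocate the result array and track its length by hand.

```python
def my_filter(predicate, items):
    """Hand-rolled equivalent of the built-in filter: keep the items
    for which predicate returns True. In C, you would also need to
    malloc the result array, grow or size it correctly, and free it."""
    result = []
    for item in items:
        if predicate(item):
            result.append(item)
    return result

evens = my_filter(lambda x: x % 2 == 0, [1, 2, 3, 4, 5, 6])
```

The point of the exercise is that once the convenience method is gone, the loop, the predicate test, and the output bookkeeping all become your problem, which is exactly the skill the author argues C builds.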


Communication Design for Avoiding Uncertainty

Cultural diversity, if not handled well, can be a source of inefficiency, confusion, frustration, anxiety, and stress at work. When we don’t appreciate the cultural differences in how others convey and interpret information, we are more likely to misunderstand them. What happens when people from other countries and cultures don’t behave according to our cultural norms? We fall for the fundamental attribution error and attribute their behavior to their personality, assuming their behavior is representative of who they are. We also find it convenient to go with cultural stereotypes without making an attempt to understand how culture plays a role in their style of communication. Techniques that made us successful with a group of people from one country may not work with another group from a different country. To be collaborative, we need to embrace cultural diversity and open ourselves to learning different communication approaches and making adjustments along the way. We need to learn, unlearn, and relearn new strategies to communicate effectively with different groups of people.


Multi-Gov Task Force Plans to Take Down the Ransomware Economy

“Ransomware attackers require little risk or effort to launch attacks, so a prohibition on ransom payments would not necessarily lead them to move into other areas,” according to the report. “Rather, they would likely continue to mount attacks and test the resolve of both victim organizations and their regulatory authorities. To apply additional pressure, they would target organizations considered more essential to society, such as healthcare providers, local governments and other custodians of critical infrastructure.” So instead, “Updating breach disclosure laws to include a ransom-payment disclosure requirement would help increase the understanding of the scope and scale of the crime, allow for better estimates of the societal impact of these payments, and enable better targeting of disruption activities.” The Framework would require ransomware victims to report details about the incident prior to paying the ransom. ... As a corollary to this, the Framework would also have cyber-insurance companies establish a common pool of money “to evaluate and pursue strategies aimed at restitution, recovery or civil asset seizures, on behalf of victims and in conjunction with law-enforcement efforts.”


Why a Serverless Data API Might be Your Next Database

DBaaS hasn’t historically been considered serverless, but that is changing. An increasing number of public cloud services are either offering serverless alternatives or moving to this architecture altogether. As one might expect, stateless services like compute are more straightforward to make serverless than stateful ones, and it’s only recently that databases and data services have begun to move in this direction. A serverless DBaaS scales to meet the immediate needs of the application it is servicing: when demand is high, it scales up the number of nodes to meet the need, and when demand drops, it scales back down. This is particularly useful in microservice, FaaS and other distributed architectures, where right-sizing your database in concert with the microservice fleet is tedious, often resulting in paging already-busy ops people. You are basically paying by the operation instead of being charged for idle instances of “always on” virtual hardware. Reads and writes, storage and network out - done. This simplicity can result in tremendous idle-time savings and a dramatically more cost-effective profile for everything from dev/test to production.
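
The pricing difference is easy to make concrete with a toy cost model. All prices below are hypothetical, chosen only to illustrate the shape of the comparison between always-on instances and per-operation billing.

```python
def provisioned_monthly_cost(instance_hourly_rate, hours=730):
    """'Always on' virtual hardware: billed whether or not it is used."""
    return instance_hourly_rate * hours

def serverless_monthly_cost(ops, price_per_million_ops):
    """Pay per read/write operation; idle time costs nothing."""
    return ops / 1_000_000 * price_per_million_ops

# Hypothetical prices: a $0.20/hour instance vs. $1.25 per million ops
always_on = provisioned_monthly_cost(0.20)              # billed even if idle
light_use = serverless_monthly_cost(5_000_000, 1.25)    # 5M ops/month
heavy_use = serverless_monthly_cost(200_000_000, 1.25)  # 200M ops/month
```

Under these made-up rates, a lightly used dev/test workload costs a few dollars instead of the full instance price, while a heavy production workload can cross back over the always-on cost, which is why the break-even point depends entirely on your traffic profile.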


How modern workflows can benefit from pentesting

One of the things you’re going to need, especially in a tech company, is a program that is unique to the company and takes into consideration the customers, the attack space they live in, the tech stack they’re using and the unique challenges they have. There is, of course, a standard menu that we each bring in our back pocket of things you want to make sure you’re checking off the list. When we dig into the application security space, you think about the people you want to hire: at what level do they need to be, do they need a coding background, are they comfortable with developers, counseling and teaching developers how to code securely, and so on. So you have this people component and a teaching component. There is also an operational rigor that the public and customers expect. It’s great that you do this internally, but what does a third party say about your program and how effective it is? And while you’re building up these processes and building out your application security department, you have these engineers working with engineers all over the company: DevOps, infrastructure, product engineers.


What Exactly Is Artificial Intelligence? (Hint: It’s All About The Datasets)

Datasets for machine learning are the main commodity in the world right now. Everybody is talking about AI and AI applications, but few are focusing on how accurate the data is and whether the data is actually correct. Data collection needs to be deliberate—the success of its intended application depends on it. As those in data science know, datasets are necessary to build a machine learning project: the dataset is used to train the machine learning model and is an integral part of creating an efficient and accurate system. If your dataset is noise-free (noisy data is meaningless or corrupt) and standard, your system will be more reliable. But the most critical part is identifying datasets that are relevant to your project. So your company has decided to make the jump into data science and needs to collect data. But if you don’t have any, where do you start? The answer is twofold. One option is to rely on open source datasets: companies like Google, Amazon, and Twitter have a ton of data they’re willing to give away, and many online sites dedicated to AI and AI applications have compiled free, categorized lists that make finding a good dataset even easier.


How to Recruit AI Talent and Keep Them Happy

AI professionals tend to be concentrated geographically in tech hubs and populous areas. While this may change as remote work becomes more prevalent post-pandemic, organizations located in large cities will likely have an easier time finding talent. AI talent currently clusters in certain industries, including high tech, banking and financial services, manufacturing, healthcare, and retail, so organizations in these sectors will likely see more applicants for open roles. When evaluating candidates for AI roles, be wary of padded resumes. As AI is still an emerging field, there are many so-called “experts” who overstate their skills, experience, credentials, education and more. It’s also important to look beyond a candidate’s technical expertise and assess their soft skills, such as business acumen, communication skills and leadership abilities. Keep in mind that elite AI talent is getting recruited all the time, and it can be a challenge to build sustainable AI initiatives when key team members leave. Ensure candidates’ previous job tenure is compatible with your organization’s strategy and rely on the interview to make sure they’re a cultural fit.


NIST Seeks Input on HIPAA Security Rule Guidance Update

Some security experts are debating whether it's time to update the HIPAA Security Rule itself - and not just the NIST guidance. "The HIPAA Security Rule is a very process-oriented rule, by intent," says privacy attorney Kirk Nahra of the law firm WilmerHale. "It addresses ways to think about and approach security, rather than identify specific standards to follow. That means that, from my perspective, it is in many ways a perfect rule that does not need to be updated in its language - the [compliance] process must be updated regularly by any covered entity or business associate, but that 'updating' is already incorporated into the rule." NIST is trying to give organizations "a way to turn the HIPAA process into reality - to move from process to substance," with updated guidance, he contends. If HHS were to consider changes to the HIPAA Security Rule, "I would only caution them as they move through the process of evaluating potential changes to keep the idea of the HIPAA Security Rule as it is, and not to turn a broad process that is flexible and scalable to adjust to the wide volume of different kinds of entities regulated by HIPAA into something more specific and less flexible," Nahra says.


What are the benefits of cognitive automation?

Although much of the hype around cognitive automation has focused on business processes, it also brings significant benefits to IT automation itself. "Cognitive automation can be the differentiator and value-add CIOs need to meet and even exceed heightened expectations in today's enterprise environment," said Ali Siddiqui, chief product officer at BMC. In this domain, cognitive automation is benefiting from improvements in AI for ITSM and in using natural language processing to automate trouble ticket resolution. Cognitive automation could also help surface and solve problems buried deep within an enterprise that would otherwise go undetected until they erupt and consume the bulk of IT's time to resolve, such as a critical system bug, site outage or a potential security threat. Instead of having to deal with back-end issues handled by RPA and intelligent automation, IT can focus on tasks that require more critical thinking, including the complexities involved with remote work or scaling their enterprises as their company grows.


Will 2021 Mark the End of World Password Day?

Technology hasn't yet evolved to a point where we can do away with passwords altogether. Instead, we keep inventing ways of making passwords more secure, propping them up as a viable way in which to secure our data. Two-step authentication does exactly what it sounds like, requiring an additional step in the login process beyond simply entering a password. Once a user has entered the password, that person will be sent a text message with a unique code or be asked to generate one via an authenticator app, which is needed to gain access to their account. This kind of multifactor authentication certainly offers an additional layer of security. It means that even if hackers crack your password, they aren't going to get very far without your mobile phone or access to your code generator. However, it's not entirely without flaws. For one, it makes the login process extremely tedious for the user, requiring additional hoops to jump through. It also creates an unwanted dependency on third parties, such as mobile service providers. What happens when a user is unable to receive their authenticator code via SMS because they're out of signal range or their operator's network goes down?
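The codes generated by authenticator apps described above are typically HMAC-based one-time passwords. A minimal implementation of the underlying HOTP scheme (RFC 4226) fits in a few lines; TOTP, the time-based variant most apps use, simply derives the counter from the current time in 30-second steps:

```python
import hmac, hashlib, struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226): HMAC-SHA1 over an
    8-byte counter, then 'dynamic truncation' down to a short code."""
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

Because both sides hold the shared secret, the server can recompute the same code and compare — no SMS delivery required, which is why app-based codes avoid the network-dependency problem described above.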



Quote for the day:

"If you don't understand that you work for your mislabeled 'subordinates,' then you know nothing of leadership. You know only tyranny." -- Dee Hock

Daily Tech Digest - May 04, 2021

Why Is There A Shortage Of MLOps Engineers?

MLOps and DevOps engineers require different skill sets. Firstly, developing machine learning models does not require a software engineering background, as the focus is mainly on the proof of concept/prototyping. Secondly, MLOps is more experimental in nature than DevOps. MLOps calls for tracking different experiments, feature engineering steps, model parameters, metrics, etc. MLOps is not limited to unit testing. Various parameters need to be considered, including data checks, model drift, analysing model performance, etc. Deploying machine learning models is easier said than done, as it involves various steps, including data processing, feature engineering, model training, model registry and model deployment. Lastly, MLOps engineers are expected to track data distribution over time to ensure the production environment is consistent with the data the model was trained on. Last year, AI/ML research hit the doldrums in the wake of the pandemic; tech giants like Google slowed down hiring AI researchers and ML engineers, and Uber laid off their AI research and engineering team.
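The drift tracking mentioned above can be illustrated with a deliberately simple check: compare a feature's production mean against its training distribution. Real MLOps tooling uses more robust statistics (e.g. population stability index or KS tests), and the numbers here are made up for the example:

```python
import statistics

def drift_score(train_vals, live_vals):
    """Crude drift check: how far the production mean of a feature has
    moved from its training mean, measured in training standard deviations."""
    mu = statistics.mean(train_vals)
    sigma = statistics.stdev(train_vals)
    return abs(statistics.mean(live_vals) - mu) / sigma

train = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]   # feature values seen at training time
stable = [10.1, 10.3, 9.9]                    # production batch, no drift
shifted = [14.0, 15.2, 14.8]                  # production batch, clear drift
```

A score near zero means the production data still looks like the training data; a score of several standard deviations is the kind of signal that should trigger retraining or investigation.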


AI security risk assessment using Counterfit

The tool comes preloaded with published attack algorithms that can be used to bootstrap red team operations to evade and steal AI models. Since attacking AI systems also involves elements of traditional exploitation, security professionals can use the target interface and built-in cmd2 scripting engine to hook into Counterfit from existing offensive tools. Additionally, the target interface can allow for granular control over network traffic. We recommend using Counterfit alongside Adversarial ML Threat Matrix, which is an ATT&CK style framework released by MITRE and Microsoft for security analysts to orient to threats against AI systems. ... The tool can help scan AI models using published attack algorithms. Security professionals can use the defaults, set random parameters, or customize them for broad vulnerability coverage of an AI model. Organizations with multiple models in their AI system can use Counterfit’s built-in automation to scan at scale. Optionally, Counterfit enables organizations to scan AI systems with relevant attacks any number of times to create baselines. Running this system regularly, as vulnerabilities are addressed, also helps to measure ongoing progress toward securing AI systems. 
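Counterfit wraps published attack algorithms; the core idea behind the simplest evasion attacks it can run (such as FGSM) is a signed-gradient perturbation. The following is a generic sketch of that idea against a toy linear scorer — it is not Counterfit's API, and the weights and inputs are invented for illustration:

```python
# Evasion-attack sketch: nudge each feature a small step (epsilon) in the
# direction that lowers a linear model's score, flipping its decision.
def fgsm_perturb(x, w, epsilon):
    """For a linear score w.x, the gradient w.r.t. x is w, so stepping
    against sign(w) is the steepest way to reduce the score."""
    return [xi - epsilon * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

w = [2.0, -1.0, 0.5]        # toy model weights (assumed, for illustration)
x = [1.0, 1.0, 1.0]         # input classified positive: score = 1.5
score = sum(wi * xi for wi, xi in zip(w, x))

x_adv = fgsm_perturb(x, w, epsilon=0.6)
adv_score = sum(wi * xi for wi, xi in zip(w, x_adv))  # pushed below zero
```

Against deep models the gradient must be computed through the network, but the pattern — small, directed input changes that flip a model's output — is exactly what tools like Counterfit automate at scale.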


New Attacks Slaughter All Spectre Defenses

The findings are going to obliterate a pile of work done by those who’ve been working hard to fix Spectre, the team says. “Since Spectre was discovered, the world’s most talented computer scientists from industry and academia have worked on software patches and hardware defenses, confident they’ve been able to protect the most vulnerable points in the speculative execution process without slowing down computing speeds too much. They will have to go back to the drawing board,” according to UVA’s writeup. The new lines of attack demolish current defenses because those defenses protect the processor only in a later stage of speculative execution. The team was led by UVA Engineering Assistant Professor of Computer Science Ashish Venkat, who picked apart Intel’s suggested defense against Spectre, which is called LFENCE. That defense tucks sensitive code into a waiting area until the security checks are executed, and only then is the sensitive code allowed to execute, he explained. “But it turns out the walls of this waiting area have ears, which our attack exploits. We show how an attacker can smuggle secrets through the micro-op cache by using it as a covert channel.”


Drake: Model-based design in the age of robotics and machine learning

The Drake developers have a philosophy of rigorous test-driven development. The governing equations for multibody physics are well known, but there are often bugs in a complex engine like this. If you scan the codebase, you will find unit tests that contain comparisons with closed-form solutions for nontrivial mechanics problems like a tumbling satellite, countless checks on energy conservation, and many other checks that help the rest of the team focus on manipulation with the confidence that the multibody models are implemented correctly. Importantly, this dynamics engine is not only for simulation. It is also built for optimization and for control. The exact same equations used for simulation can be used to compute forward or inverse kinematics and Jacobians. They can also be used for more complex queries like the gradient of an object’s center of mass. We provide smooth gradients for optimization whenever they are available (even through contact). Drake also supports symbolic computation, which is very useful for structured optimization and for use cases like automatically extracting the famous “lumped parameters” for parameter estimation directly from the physics engine.
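The energy-conservation checks described above are a good pattern for any dynamics code, not just Drake. Below is a minimal, self-contained version of that style of unit test — not Drake's code — for a frictionless pendulum integrated with semi-implicit (symplectic) Euler, whose total energy should stay nearly constant over many steps:

```python
import math

def simulate_pendulum(theta0, omega0, g=9.81, L=1.0, dt=1e-3, steps=10_000):
    """Semi-implicit Euler: update velocity from the current angle first,
    then update the angle with the new velocity (symplectic, so energy
    error stays bounded instead of growing)."""
    theta, omega = theta0, omega0
    for _ in range(steps):
        omega += -(g / L) * math.sin(theta) * dt
        theta += omega * dt
    return theta, omega

def energy(theta, omega, g=9.81, L=1.0, m=1.0):
    """Kinetic + potential energy of the pendulum; conserved without friction."""
    return 0.5 * m * (L * omega) ** 2 + m * g * L * (1 - math.cos(theta))

e0 = energy(0.5, 0.0)                       # energy at the initial state
theta, omega = simulate_pendulum(0.5, 0.0)  # 10 seconds of simulated swinging
e1 = energy(theta, omega)                   # should match e0 closely
```

Comparing a simulated trajectory against a conserved quantity (or a closed-form solution, as Drake's tests do for problems like the tumbling satellite) catches integration bugs that spot-checking individual states would miss.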


How to lead a digital transformation — ethically

Not all ethical imperatives related to digital transformation are as debatable as the suggestion that it should be people-first; some are much more black and white, like the fact that you have to start somewhere to get anywhere. Luckily, “somewhere” doesn’t have to be from scratch. Governance, risk and compliance (GRC) standards can be used to create a highly structured framework that’s mostly closed to interpretation and provides a solid foundation for building out and adopting digital solutions. The utility of GRC models applies equally to startups and multinationals and offers more than just a playbook; thoughtful application of GRC standards can also help with leadership evaluation, progress reports and risk analysis. Think of it like using bowling bumpers — they won’t guarantee you roll a strike, but they’ll definitely keep the ball out of the gutter. Of course, a given company might not know how to create a GRC-based framework (just like most of us would be at a loss if tasked with building a set of bowling bumpers). This is why many turn to providers and frameworks like IBM OpenPages, COBIT and ITIL for prefab foundations.


Use longitudinal learning to reduce risky user behavior

Longitudinal learning is a teaching method that is gaining traction within academia, particularly for corporate training. This continuing education approach involves administering shorter assessments of specific content (such as whether to click on a URL embedded within an email sent by an unknown user) repeatedly over time. Through a consistent assessment process, security concepts and information are reinforced so that knowledge is retained and accumulated gradually. Studies on longitudinal learning in healthcare showed that testing medical students in combination with explaining the information is the most effective way to drive the long-term retention of information. Consistent, repetitive lessons are critical to help employees overcome the cognitive biases that cybercriminals count on to execute their attacks. The human mind is stingy; that is to say, the brain processes so much information daily that it is constantly trying to take shortcuts to save energy and enable multi-tasking. Cybercriminals know this, which is why impersonation attacks, phishing, and rnalicious URLs are so effective. Did you catch the typo in the last sentence? If not, look at the word “malicious” again.


This ambitious Microsoft project aims to fix cloud computing security

A Microsoft research project, Project Freta, aims to change that, providing tools to identify malware running on virtual machines in the cloud. It takes an economic approach to managing malware, which is only valuable to bad actors as long as it's undetected: once identified on one system, malware code is no longer reusable, as its signature can be added to active scanning tools. But if we're to have any success, we need to be able to scan many thousands of devices at the push of a button. The very industrial scale of the cloud means that traditional scanning techniques are too slow, looking for one or two compromised images in an ever-growing fleet. It's a reminder of that old Cold War adage: your attackers only have to be lucky once; you have to be lucky every time. Microsoft Research's security specialists have been thinking about this problem, and Project Freta encapsulates much of this thinking in a cloud-centric proof-of-concept. Designed to look for in-memory malware, it provides a portal where you can scan memory snapshots from Linux and Windows virtual machines. Initially focusing on virtual machine instances, it's intended to show the techniques and tools that can be used to scan for malware at massive scale.
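The economic point above — that a malware sample loses value once its signature is known — rests on how cheap signature scanning is. The sketch below illustrates the idea on a raw memory snapshot; the snapshot bytes and the signature are invented for the example and have nothing to do with Freta's actual detection logic:

```python
# Toy signature database: known malware byte patterns, keyed by name.
# Both the name and the hex pattern here are made up for illustration.
KNOWN_SIGNATURES = {
    "demo-implant": bytes.fromhex("deadbeefcafe"),
}

def scan_snapshot(snapshot: bytes):
    """Return the names of all known signatures found in a memory dump.
    Python's `in` on bytes does an efficient substring search."""
    return [name for name, sig in KNOWN_SIGNATURES.items() if sig in snapshot]

# A fake 134-byte "memory snapshot" with the pattern embedded in the middle.
dump = b"\x00" * 64 + bytes.fromhex("deadbeefcafe") + b"\x00" * 64
hits = scan_snapshot(dump)
```

Real in-memory detection is far harder — malware mutates, packs and encrypts itself precisely to defeat this kind of matching — which is why Freta analyzes whole memory snapshots rather than relying on byte signatures alone.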


Why India Is A Hotspot For Data Labelling Services?

“Numerous data labelling firms have sprung up to address this growing need, and many of them are tapping into a global pool of ‘gig workers’ that can get this done effectively. Software and algorithms make it easier to divvy up tasks and have people work at their convenience. India offers a huge talent pool with ready access to smartphones and the ability to tap into a new income source or to supplement their earnings. Time difference, in this case, can even be an asset,” said Girish Muckai, Chief Sales & Marketing Officer of HEAL Software Inc. “Training AI models to deliver high levels of accuracy is critical to success. However, labelling training data sets is tedious work. It’s time consuming, complex and requires a significant workforce. The tech industry’s outsourcing boom in India and its large population make it a growing hotbed of this precision work. Its people and skills position India as a key resource for years to come in an increasingly digital world,” said Lori McKellar, Senior Director, Product Marketing at OpenText. “India has emerged as a huge pool of employable workers to undertake data labelling jobs.


Lessons I've Learned From My Career in Technology

One of the things that I wish I had known earlier in my career is that finding your passion is the most crucial part of the job. Don't misunderstand me -- finding your passion doesn't mean that you'll be doing what you love every day. It's about finding a company, industry, or role that you believe can make a difference. Working in IT is challenging. You'll have hard deadlines to meet, clients to impress, customers to help -- and working nights, weekends and holidays is all but inevitable in most jobs. However, the thing that will push you through it and make it all worthwhile is if you're passionate about the work you do. How can you tell if you're passionate about a company or an industry? You get excited thinking about what the business or industry does. This is so important. If you're not excited about the potential impact of your work, you're not passionate about the industry. This passion will help to drive you through the more monotonous parts of your job. You're helping your customers: So many IT companies are now inventing problems to solve with their products instead of focusing on the issues consumers face. Look for a job that sees you actively helping consumers -- this will give you a sense of accomplishment at the end of the day.


The Rise of Asynchronous Collaboration and What It Means for Development Teams

Asynchronous collaboration and project management tools can serve as our panacea, an escape from the virtual spotlight and constant time-suck of video chats and conference calls. These tools offer us a respite by providing a means to collaborate very effectively through cards and boards filled with status updates, comments, files, and even visual workflows that can take the place (and, in some cases, improve upon) our beloved whiteboards. They can effectively take the place of non-productive meetings, allowing us to track our work, collaborate with our teammates, and achieve our objectives without the need for lights and cameras. Here, I’ll share information about some of the asynchronous collaboration and project management tools I find most useful and how they’ve helped us maximize productivity and collaboration. I must begin, though, with a couple of caveats. First, while the tools I’ll reference here are, on the whole, great, there are some drawbacks to asynchronous collaboration that you should be aware of, and I’ll go into detail about them a little further on. Second, these tools cannot and should not be considered permanent replacements for video calls. 



Quote for the day:

"Brilliant strategy is the best route to desirable ends with available means." -- Max McKeown

Daily Tech Digest - May 03, 2021

Reinforcement learning competition pushes the boundaries of embodied AI

Creating reinforcement learning models presents several challenges. One of them is designing the right set of states, rewards, and actions, which can be very difficult in applications like robotics, where agents face a continuous environment that is affected by complicated factors such as gravity, wind, and physical interactions with other objects. This is in contrast to environments like chess and Go that have very discrete states and actions. Another challenge is gathering training data. Reinforcement learning agents need to train using data from millions of episodes of interactions with their environments. This constraint can slow robotics applications because they must gather their data from the physical world, as opposed to video and board games, which can be played in rapid succession on several computers. To overcome this barrier, AI researchers have tried to create simulated environments for reinforcement learning applications. Today, self-driving cars and robotics often use simulated environments as a major part of their training regime. “Training models using real robots can be expensive and sometimes involve safety considerations,” Chuang Gan, principal research staff member at the MIT-IBM Watson AI Lab, told TechTalks.
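The state/action/reward loop described above can be made concrete with the smallest possible example: tabular Q-learning on a made-up one-dimensional corridor, where the agent must learn to walk right to reach a goal. This toy is nothing like a continuous robotics environment, but it shows the same training loop in miniature:

```python
import random

# Corridor of 5 states (0..4); reward 1.0 only on reaching the goal state 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                       # move left / move right

def step(state, action):
    """Environment dynamics: clamp movement to the corridor."""
    nxt = min(max(state + action, 0), GOAL)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2        # learning rate, discount, exploration

for _ in range(500):                     # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy: explore sometimes, otherwise take the best-known action.
        a = random.choice(ACTIONS) if random.random() < eps else \
            max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r, done = step(s, a)
        # Q-learning update toward reward plus discounted best next value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# Greedy policy for every non-goal state: should be "always move right".
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
```

The article's point lands here too: even this trivial environment needs hundreds of episodes of interaction, and in robotics each "episode" is expensive real-world (or simulated) time.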


ONNX Standard And Its Significance For Data Scientists

The ONNX standard aims to bridge the gap and enable AI developers to switch between frameworks based on the project’s current stage. Currently, the frameworks supported by ONNX include Caffe, Caffe2, Microsoft Cognitive Toolkit, MXNet and PyTorch. ONNX also offers connectors for other standard libraries and frameworks. “ONNX is the first step toward an open ecosystem where AI developers can easily move between state-of-the-art tools and choose the combination that is best for them,” Facebook had said in an earlier blog. ONNX was specifically designed for the development of machine learning and deep learning models. It includes a definition for an extensible computation graph model along with built-in operators and standard data types. ONNX is a standard format for both DNN and traditional ML models. The interoperability of the ONNX format provides data scientists with the flexibility to choose their framework and tools to accelerate the process, from the research stage to the production stage. It also allows hardware developers to optimise deep learning-focused hardware based on a standard specification compatible with different frameworks.


Microsoft warns of damaging vulnerabilities in dozens of IoT operating systems

According to an overview compiled by the Cybersecurity and Infrastructure Security Agency, 17 of the affected products already have patches available, while the rest either have updates planned or are no longer supported by the vendor and won’t be patched. See here for a list of impacted products and patch availability. Where patching isn’t available, Microsoft advises organizations to implement network segmentation, eliminate unnecessary connections to operational technology control systems, use (properly configured and patched) VPNs with multifactor authentication and leverage existing automated network detection tools to monitor for signs of malicious activity. While the scope of the vulnerabilities across such a broad range of different products is noteworthy, such security holes are common with connected devices, particularly in the commercial realm. Despite billions of IoT devices flooding offices and homes over the past decade, there remains virtually no universally agreed-upon set of security standards – voluntary or otherwise – to bind manufacturers. As a result, the design and production of many IoT products end up being dictated by other pressures, such as cost and schedule.


Automate the hell out of your code

Continuous integration is a software development principle that suggests developers should write small chunks of code, and when they push this code to their repository it should be automatically tested by a script that runs on a remote machine, automating the process of adding new code to the code base. This automates software testing, increasing developers' productivity and keeping their focus on writing code that passes the tests. ... If continuous integration is about adding new chunks of code to the code base, then CD is about automating the building and deployment of our code to the production environment. This ensures that the production environment is kept in sync with the latest features in the code base. You can read this article for more on CI/CD. I use Firebase Hosting, so we can define a workflow that builds and deploys our code to Firebase Hosting rather than having to do that ourselves. But we have one or two issues to deal with: normally we can deploy code to Firebase from our computer because we are logged in from the terminal, but how do we authorize a remote CI server to do this? Open up a terminal and run the command firebase login:ci; it will throw back a FIREBASE_TOKEN that we can use to authenticate CI servers.
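Putting the pieces above together, a workflow along these lines would build and deploy on every push. This is a sketch that assumes GitHub Actions, npm scripts named test and build, and that the token from firebase login:ci has been saved as a repository secret called FIREBASE_TOKEN — adjust all of those to your setup:

```yaml
# .github/workflows/deploy.yml — hypothetical example, not a drop-in config
name: build-and-deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
      - run: npm ci && npm test        # CI: fail fast if the tests break
      - run: npm run build             # CD: build the production bundle
      - run: npx firebase-tools deploy --only hosting --token "$FIREBASE_TOKEN"
        env:
          FIREBASE_TOKEN: ${{ secrets.FIREBASE_TOKEN }}
```

Storing the token as a secret (rather than committing it) keeps the CI server authorized without exposing credentials in the repository.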


15 open source GitHub projects for security pros

For dynamic analysis of a Linux binary, malicious or benign, PageBuster makes it super easy to retrieve dumps of executable pages within packed Linux processes. This is especially useful when pulling apart malware packed with specialized run-time packers that introduce obfuscation and hamper static analysis. “Packers can be of growing complexity, and, in many cases, a precise moment in time when the entire original code is completely unpacked in memory doesn't even exist,” explains security engineer Matteo Giordano in a blog post. PageBuster also conducts its page-dumping activities carefully so as not to trigger any anti-virtual-machine or anti-sandboxing defences present in the analyzed binary. ... The free AuditJS tool can help JavaScript and NodeJS developers ensure that their project is free from vulnerable dependencies, and that the dependencies of dependencies included in their project are free from known vulnerabilities. It works by peeking into what’s inside your project’s manifest file, package.json. “The great thing about AuditJS is that not only will it scan the packages in your package.json, but it will scan all the dependencies of your dependencies, all the way down. ...” said developer Dan Miller in a blog post.


Securing the Future of Remote Working With Augmented Intelligence

Emerging technology can potentially reshape the dimensions of organizations. Augmented reality, as well as virtual reality, will play a crucial role in office design trends that have already come into being. Architecture organizations are already dedicating a space for virtual reality. This is basically an area that is equipped with all the essential requirements of virtual reality. An immense number of businesses are likely to take this step as more meetings are held virtually to accommodate the spread-out workforce. Organizations are currently spending a hefty amount on virtual solutions and will continue to invest in the future. Director of business development at HuddleCamHD, Paul Richards, affirmed that “numerous meeting rooms will become more similar to TV production studios instead of collaborative spaces.” Erik Narhi, an architect and computational design lead at the Los Angeles office of global design company Buro Happold, also agreed that in this current era it is impossible to neglect augmented reality and virtual reality. Hybrid work from home is not going away anytime soon.


Risk-based vulnerability management has produced demonstrable results

Risk-based vulnerability management doesn’t ask “How do we fix everything?” It merely asks, “What do we actually need to fix?” A series of research reports from the Cyentia Institute have answered that question in a number of ways, finding for example, that attackers are more likely to develop exploits for some vulnerabilities than others. Research has shown that, on average, about 5 percent of vulnerabilities actually pose a serious security risk. Common triage strategies, like patching every vulnerability with a CVSS score above 7 were, in fact, no better than chance at reducing risk. But now we can say that companies using RBVM programs are patching a higher percentage of their high-risk vulnerabilities. That means they are doing more, and there’s less wasted effort. The time it took companies to patch half of their high-risk vulnerabilities was 158 days in 2019. This year, it was 27 days. And then there is another measure of success. Companies start vulnerability management programs with massive backlogs of vulnerabilities, and the number of vulnerabilities only grows each year. Last year, about two-thirds of companies using a risk-based system reduced their vulnerability debt or were at least treading water. This year, that number rose to 71 percent.
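The triage shift described above — ranking by exploitation likelihood rather than patching everything over a CVSS threshold — is easy to demonstrate. The CVE names, scores, and the exploit_likelihood field below are invented for illustration; real RBVM tools derive such a risk signal from threat intelligence and observed exploit activity:

```python
# Toy vulnerability inventory. Note CVE-C: low CVSS but high exploit
# likelihood — exactly the kind of item a pure CVSS-above-7 policy misses.
vulns = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_likelihood": 0.02},
    {"id": "CVE-B", "cvss": 7.1, "exploit_likelihood": 0.01},
    {"id": "CVE-C", "cvss": 6.5, "exploit_likelihood": 0.70},
    {"id": "CVE-D", "cvss": 8.2, "exploit_likelihood": 0.45},
]

# Severity-first queue: patch in descending CVSS order.
cvss_first = [v["id"] for v in sorted(vulns, key=lambda v: -v["cvss"])]

# Risk-based queue: patch in descending order of exploitation likelihood.
risk_first = [v["id"] for v in sorted(vulns, key=lambda v: -v["exploit_likelihood"])]
```

The two queues disagree sharply: the severity-first ordering would patch CVE-C last (and a CVSS-above-7 policy would skip it entirely), while the risk-based ordering puts it first — mirroring the research finding that only a small fraction of vulnerabilities pose real risk.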


A definitive primer on robotic process automation

This isn’t to suggest that RPA is without challenges. The credentials enterprises grant to RPA technology are a potential access point for hackers. When dealing with hundreds to thousands of RPA robots with IDs connected to a network, each could become an attack vector if companies fail to apply identity-centric security practices. Part of the problem is that many RPA platforms don’t focus on solving security flaws. That’s because they’re optimized to increase productivity and because some security solutions are too costly to deploy and integrate with RPA. Of course, the first step to solving the RPA security dilemma is recognizing that there is one. Realizing RPA workers have identities gives IT and security teams a head start when it comes to securing RPA technology prior to its implementation. Organizations can extend their identity and governance administration (IGA) to focus on the “why” behind a task, rather than the “how.” Through a strong IGA process, companies adopting RPA can implement a zero trust model to manage all identities — from human to machine and application.


Demystifying Quantum Computing: Road Ahead for Commercialization

For CIOs determining what the next steps for quantum computing are, they must first consider the immediate use cases for their organization and how investments in quantum technology can pay dividends. For example, for an organization prioritizing accelerated or complex simulations, whether it’s for chemical or critical life sciences research like drug discovery, the increase in computing performance that quantum offers can make all the difference. For some organizations, immediate needs may not be as defined, but there could be an appetite to simply experiment with the technology. As many companies already put a lot behind R&D for other emerging technologies, this can be a great way to play around with the idea of quantum computing and what it could mean for your organization. However, like all technology, investing in something simply for the sake of investing in it will not yield results. Quantum computing efforts must map back to a critical business or technology need, not just for the short term, but also the long term as quantum computing matures. CIOs must also consider how the deployment of the technology changes existing priorities, particularly around efforts such as cybersecurity. 


Leaders Talk About the Keys to Making Change Successful and Sustainable

Many organizations that have been around for a while have established processes that are hard to change. Mitch Ashley, CEO and managing analyst at Accelerated Strategies Group, who’s helped create several DevOps organizations, shared his perspective about why changing a culture can be so difficult. “Culture is a set of behaviors and norms, and also what’s rewarded in an organization. It’s both spoken and unspoken. When you’re in an organization for a period of time, you get the vibe pretty quickly. It’s a measurement culture, or a blame culture, or a performance culture, or whatever it is. Culture has mass and momentum, and it can be very hard to move. But, you can make cultural changes with work and effort.” What Mitch is referring to, this entrenched culture that can be hard to change, is sometimes called legacy cultural debt. I loved Mitch’s story about his first foray into DevOps because it’s a great place to start if you’re dealing with a really entrenched legacy culture. He and his team started a book club, and they read The Phoenix Project. He said, “The book sparked great conversations and helped us create a shared vision and understanding about our path to DevOps. ...”



Quote for the day:

"The secret of leadership is simple: Do what you believe in. Paint a picture of the future. Go there. People will follow." -- Seth Godin

Daily Tech Digest - May 02, 2021

In the Race to Hundreds of Qubits, Photons May Have "Quantum Advantage"

The more qubits are quantum-mechanically entangled together, the more calculations they can simultaneously perform. A quantum computer with enough qubits could in theory achieve a “quantum advantage” enabling it to grapple with problems no classical computer could ever solve. For instance, a quantum computer with 300 mutually-entangled qubits could theoretically perform more calculations in an instant than there are atoms in the visible universe. Ostensible quantum computing advantages aside, relative advantages of one quantum computing platform versus another are less clear. The quantum computers that tech giants such as Google, IBM and Intel are investigating typically rely on qubits based either on superconducting circuits or trapped ions. Those systems typically require expensive and elaborate cryogenics, keeping them just a few degrees (sometimes mere fractions of a single degree) above absolute zero. The expensive, bulky systems needed to keep qubits at such frigid temperatures can make it extraordinarily challenging to scale these platforms up to high numbers of qubits.
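The 300-qubit claim is easy to sanity-check: the joint state space of n qubits has 2^n basis states, and 10^80 is a common order-of-magnitude estimate for the number of atoms in the visible universe:

```python
# Size of the joint state space of 300 entangled qubits: 2**300 basis states.
state_space = 2 ** 300            # about 2.04e90

# A common rough estimate of the number of atoms in the visible universe.
atoms_estimate = 10 ** 80

exceeds_atom_count = state_space > atoms_estimate
```

With 2^300 ≈ 2 × 10^90 against roughly 10^80 atoms, the state space exceeds the atom count by about ten orders of magnitude — though describing and manipulating that space usefully, rather than merely having it exist, is the hard part.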


Move Over Artificial Intelligence, There Is A New AI In Town

The lack of Data Scientists has caused many training efforts to focus on teaching the core of algorithms and enabling people from all walks of life to build artificial intelligence solutions via products that democratize or automate data science. What we need for Augmented Intelligence is different. We need people who are subject matter experts in their fields, like doctors, to understand just enough Artificial Intelligence to work collaboratively with one. This means they must have a level of Artificial Intelligence Literacy. AI Literacy can help individuals understand the core concepts of how artificial intelligence works, the context to understand its strengths and weaknesses in their application, the capabilities to apply their understanding to solve problems, and the creativity to see how to innovate with it for their domain. Why are all four of these Cs important? Augmented Intelligence is about combining the intelligence of humans and machines, where both contribute, rather than humans becoming the caretakers of the machines. This requires the human to not just understand the concepts and have the capability to apply them in a specific context, but also to apply human creativity to envision new uses of the human/machine combo.


Low-code and no-code is shifting the balance between business and technology professionals

IT departments are trying to balance two things. On one hand, they see a growing interest from business experts in solving their own workgroup-level problems themselves. On the other, they want to maintain control and governance over any software created in the organization. It's often the application development managers, struggling with never-ending backlogs and short-staffing, who are most bullish on enterprise low-code -- they see a way to address both of these sets of demands. With low-code and no-code, they can give business units skill-appropriate tools to solve some of their own problems, while ensuring that anything they build goes through a centralized process for quality and security -- the same process their enterprise software development goes through. ... This wave of low-code adoption is nothing but good news for traditional software developers. In our customer base, developers get to deliver solutions faster, avoid rework and technical debt, and elevate the problem space they operate in. That is, they get to work on harder, more interesting software problems -- say, software architecture, or working through the creation of complex logic.


Cyber Extortion Thriving Thanks to Accellion FTA Hits

Some ransomware gangs run their own attacks, but many operations now function using a ransomware-as-a-service model, in which operators develop code and infrastructure and affiliates infect victims. For every victim who pays, the operator and affiliate split the profits, with affiliates often keeping 60% to 70%. Experts say this division of labor has helped RaaS operations maximize profits - especially if they can recruit highly skilled affiliates. The type of ransomware most often encountered by victims assisted by Coveware in Q1 was Sodinokibi, aka REvil, followed by Conti, Lockbit, Clop and Egregor. All are prolific RaaS operations. But competition remains fierce between RaaS operations as they attempt to recruit top affiliates to maximize their paydays, including via big game hunting - hitting larger victims in pursuit of bigger ransom returns. Seeking fresh avenues for finding new victims, some RaaS operations have begun running campaigns using malware written to crypto-lock Unix and Linux systems. Defray777, Mespinoza, Babuk, Nephilim and Darkside have already deployed such code, and Sodinokibi has suggested it will do so, Coveware says.
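The revenue split at the heart of the RaaS model can be sketched in a few lines. This is purely illustrative: the 70% affiliate share is one end of the range the article cites, and the ransom figure is a hypothetical, not from Coveware's data.

```python
def split_ransom(payment_usd: int, affiliate_pct: int = 70) -> tuple[int, int]:
    """Return (affiliate_cut, operator_cut) for a paid ransom.

    Integer math avoids floating-point rounding; affiliate_pct is the
    affiliate's share in whole percent (60-70 per the article).
    """
    affiliate_cut = payment_usd * affiliate_pct // 100
    return affiliate_cut, payment_usd - affiliate_cut

# Hypothetical $500,000 ransom with a 70/30 affiliate/operator split.
print(split_ransom(500_000))  # (350000, 150000)
```

The economics explain the recruiting competition the article describes: the operator's take scales with both the number of affiliates and the size of each payout, which is exactly what "big game hunting" optimizes for.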


Lessons in simplicity strategy

Six, as I have written before, is a useful organizing number, and is the smallest in a range of numbers described in mathematics as “perfect.” A number is perfect if it is a positive integer equal to the sum of its proper divisors (its divisors excluding the number itself). Six, of course, is the sum of one, two, and three. Six is also workable, definable, measurable, and memorable. If you adopt a small-is-better mentality (I love the two-pizza rule, which says if your working group can’t be fed by two pizzas, it’s probably too large), six gives you a guideline that can be established and maintained fairly quickly. The hexagon, nature’s diamond, lends itself beautifully to organizational management because of the way it embodies interconnection, resilience, and economy. Like the equilateral triangle and the square, the hexagon tessellates, which is to say copies of it can tile a surface without gaps (unlike, say, circles, an all-too-popular PowerPoint intersecting image). This is crucial, because so much of what people do intersects and connects. The hexagon is a powerful visual aid and connects us to network theory, in which “edges” play a crucial role.
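The definition of a perfect number is concrete enough to check directly: sum the proper divisors and compare. A minimal sketch (the trial-division loop is the naive approach, fine for small numbers):

```python
def is_perfect(n: int) -> bool:
    """True if n equals the sum of its proper divisors (divisors < n)."""
    if n < 2:
        return False
    return n == sum(d for d in range(1, n) if n % d == 0)

# Six is the smallest perfect number: 1 + 2 + 3 == 6.
# The next one is 28: 1 + 2 + 4 + 7 + 14 == 28.
print([n for n in range(1, 30) if is_perfect(n)])  # [6, 28]
```

Perfect numbers are rare (the third is 496), which is part of what makes six such a tidy organizing choice: it is the only one small enough to describe a working group.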


Lawmakers Seek to Expand CISA's Role

A five-year national risk management cycle review by CISA, as called for by Hassan and Sasse, is needed to better address threats to critical infrastructure, says Tim Wade, a former network and security technical manager with the U.S. Air Force. He's now a technical director at the security firm Vectra AI. "Failure to have a credible and timely recovery strategy places nontrivial strain on detection and response requirements, whereas protecting and enabling rapid recovery removes tension from the entire system," Wade says. "This move marks a step in the right direction, and even as the road ahead is long, we all have a vested interest in its success." The various Congressional proposals regarding CISA could go a long way toward addressing threats to IT and operational technology (OT) networks, says Joseph Carson, chief security scientist and advisory CISO at security firm Thycotic. "One of the most vital areas to focus on is regaining visibility and control of the network as a whole, including the disparate IT and OT systems. In particular, this means having a firm command of how systems are accessed," Carson says.


If you step back and think about the conversation as an opportunity to learn rather than a position to defend, it helps open the aperture into a dialogue rather than a debate. Somewhere along life’s path (we usually refer to this as getting older) learning is replaced with knowledge, yet if we make the choice to continuously learn from others’ perspectives, learning can be lifelong and knowledge can keep growing rather than merely being maintained. Consider that openness to experience—the degree to which you are interested in exploring new ideas, nurturing your hungry mind, and replacing routine with unconventional and unfamiliar adventures—decreases as we get older. The more we know, the less interested we are in learning something new. As Lisa Feldman Barrett notes in her recent book, our brains are not for thinking: they are for saving energy and putting decisions on autopilot. It’s okay to want to listen and learn and still hold onto your own beliefs and values. The act of listening doesn’t indicate agreement. In fact, it is a lot easier to agree when we don’t listen to one another. Remember that the difference between judging and pre-judging is understanding, and that in order to understand you really need to be willing to listen and learn.


Australia's eSafety and the uphill battle of regulating the ever-changing online realm

Appearing before the Parliamentary Joint Committee on Intelligence and Security as part of its inquiry into extremist movements and radicalism in Australia, Inman Grant said that while the threshold in the new take-down powers is quite high, they will give her agency a fair amount of leeway to look at intersectional factors, such as the intent behind the post. "I think that the language is deliberately -- it's constrained in a way to give us some latitude ... we have to look at the messenger, we have to look at the message, and we have to look at the target," she said on Thursday. The Act will also apply only to individuals, not to groups of people. The commissioner surmised this was a matter of striking a balance with freedom of expression. "To give us a broader set of powers to target a group or target en masse, I think, would probably raise a lot more questions about human rights," she said. She said it's a case of "writing the playbook" as it unfolds, given there is no similar law internationally to help guide the Act. Inman Grant said she has tried to set expectations that she isn't about to conduct "large scale rapid fire".


Corporate e-waste: The unfashionable global crisis

Ensuring redundant business technology is reused is an important way to reduce our environmental impact. When a device reaches the end of its first lifecycle and a business needs to upgrade, that device still holds value, both to the company and to a second user. Giving a device a second life reduces carbon emissions and electronic waste. Every laptop that is reused displaces the need to manufacture a new one, also saving natural resources. We like to say that every time we rehome a device, we’re saving the planet one laptop at a time. Dumping old devices also represents a wasted opportunity to help people access used IT equipment at more affordable prices. Not everyone needs or can afford new tech, so a vibrant secondhand market is crucial to closing the digital divide. Plus, while the disposal of IT equipment is a hassle and an expense for businesses, ensuring old devices are reused gives equipment extra value – a value which can be offset against the cost of purchasing new IT devices. With organisations under pressure to accelerate the shift to mobile devices, this could free up much needed cash to fund digital transformation projects.


The Biggest Data Management Mistake Chief Data Officers Make

Most advice to Chief Data Officers in these situations comes down to this: Ensure that your data strategy provides business value (e.g., increasing revenue and improving cost control) and manages risk (e.g., compliance and privacy). While this may seem like the right advice, it puts the onus on the CDO to propose business value to the rest of the C-suite instead of supporting the initiatives in which leaders have already invested. These business initiatives require data and analytics that the CDO can provide. But if CDOs initiate their own projects and separate business value propositions, the existing business initiatives are often left without the data management platform they require. This results in a divergence of projects that don’t support each other: the business initiatives will continue to generate data while the CDO builds a “foundation” of data, creating yet another silo. The difference between proposing and supporting business value may seem subtle, but it’s actually profound. Most IT leaders running enterprise data management today are building up programs that have value independent of major business initiatives.



Quote for the day:

"Every great leader can take you back to a defining moment when they decided to lead." -- John Paul Warren