Daily Tech Digest - April 12, 2021

Coding interviews are terrible. Can we make them better?

A typical coding interview involves presenting a candidate with a technical problem, which they'll have to solve in real time in front of the interviewing panel. While formats vary from one company to another, one common format is whiteboard coding, whereby a candidate might be asked to provide a solution to a problem involving a binary tree. It was a binary tree task that drew the ire of Howell in his now-famous tweet. These tasks are a fairly typical part of technical interviews, designed to assess a candidate's ability to solve a programming problem and show their thinking 'out loud'. Still, most programmers say they aren't representative of anything they'd have to do in their day-to-day job, and consider them an outdated means of assessing candidates that doesn't reflect skill level. "These little challenges don't show the greater skill sets, which for me are the ability to construct large programs," says Howell. "It's not about small algorithms. It's about the design of larger systems, and that's way more important." Howell also sees traditional coding interviews as reflective of an industry that focuses too much on building at speed. "It's partly because the software industry moves so fast," he says.
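The kind of task Howell objects to is easy to state and quick to solve on a whiteboard, which is precisely why critics say it measures little. A minimal sketch of the classic variety (the function and tree shape here are illustrative, not from the article):

```python
# A classic whiteboard exercise: compute the maximum depth of a binary tree.
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def max_depth(node):
    """Return the number of nodes on the longest root-to-leaf path."""
    if node is None:
        return 0
    return 1 + max(max_depth(node.left), max_depth(node.right))

# A three-level tree: root (1), children (2, 3), one grandchild (4).
tree = Node(1, Node(2, Node(4)), Node(3))
print(max_depth(tree))  # 3
```

Ten lines of recursion, which is Howell's point: it says nothing about a candidate's ability to design larger systems.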


How Augmented Reality Strengthens Biotech Manufacturing

Factories where engineers or scientists use smart glasses for virtual guidance, operators work with remote vendors to detect equipment failures in real time, and interactive training sessions are run by directors on another continent are already here. “The barriers to adoption are decreasing as the AR industry becomes more robust,” notes Stracquatanio. Probably the biggest advantage of AR is that it enables seeing the production process virtually, without the need to be there. “It’s a game-changer for the industry. Individuals can have eyes and ears on site at a moment’s notice to address an emerging issue, or to host routine remote collaboration sessions,” Stracquatanio highlights. AR can also increase control over the manufacturing process. Pharma and biotech companies cannot afford mistakes during the production phase. A small oversight can lead to serious consequences, such as having to start from scratch, which is expensive and time-consuming. A recent example is that of Johnson & Johnson’s manufacturing partner Emergent BioSolutions, whose workers erroneously mixed ingredients from two different Covid-19 vaccines, leading to the waste of around 15 million vaccine doses.


Fileless Malware, Endpoint Attacks on the Rise

Cybercriminals are increasingly leveraging fileless malware, cryptominers and encrypted attacks, targeting users both at remote locations and at corporate assets behind the traditional network perimeter. These were among the findings of WatchGuard Technologies’ Internet Security Report for Q4 2020, which found fileless malware and cryptominer attack rates grew by nearly 900% and 25%, respectively, while unique ransomware payloads plummeted by 48% in 2020 compared to 2019. The report also found that botnet malware targeting IoT devices and routers became a top strain; among them was the Linux.Generic virus (also known as “The Moon”), malware that is part of a network of servers directly targeting IoT devices and consumer-grade network devices, such as routers, to exploit any open vulnerabilities. Total network attack detections grew by 5% in Q4, reaching their highest level in more than two years, while total unique network attack signatures showed steady growth as well, with a 4% increase compared with the third quarter of 2020. “We believe the increase in endpoint attacks between 2019 and 2020 is largely due to the widespread rise of remote work in response to the global pandemic,” Corey Nachreiner, WatchGuard CTO, explained.


Could social media networks pave the way towards stronger authentication?

Passwords are still the most common form of user authentication, “protecting” accounts, devices and systems, but alone they don’t provide strong security. Nor do they offer a good user experience. Many passwords don’t even meet the minimum criteria of being unique and complex. People reuse passwords across accounts because they simply can’t keep track of all the logins they have. They choose passwords that are easy to remember to ease the burden, but that makes them easy to guess too. In fact, our research shows that people reuse their passwords across an average of ten personal accounts, while ‘123456’ still topped the list of most common passwords in 2020. Even when they have chosen well, a unique and complex password can still fall victim to a modern phishing attack. After all, even an exemplary password can’t protect an account if the holder has been tricked into providing the information. From a user experience perspective, there is the stress and strain of choosing, for each platform or service, a password that is unique and complex yet still meets the provider’s criteria.
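To see how little the weakest passwords resist even the most basic screening, here is a toy check (the common-password list is a tiny illustrative sample, and the 12-character minimum is an assumed policy, not one from the article):

```python
# Toy illustration of minimal password screening: reject entries that are
# too short or appear on a (tiny, illustrative) common-password list.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "111111"}

def is_weak(password: str) -> bool:
    if len(password) < 12:
        return True
    if password.lower() in COMMON_PASSWORDS:
        return True
    # Require at least two character classes (lower, upper, digit, symbol).
    classes = sum([
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(not c.isalnum() for c in password),
    ])
    return classes < 2

print(is_weak("123456"))                         # True
print(is_weak("correct-horse-battery-staple"))   # False
```

Checks like this catch ‘123456’ instantly, but, as the article notes, no complexity rule protects a password its holder has been phished into revealing.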


Nation-state cyber attacks double in three years

“Cyber crime economies are shaping the character of nation-state conflicts,” said McGuire. “There is also a ‘second generation’ of cyber weaponry in development that draws upon enhanced capabilities in computing power, AI [artificial intelligence] and cyber/physical integrations. One such example is ‘Boomerang’ malware, which is ‘captured’ malware that can be turned inward to operate against its owners. “Nation states are also developing weaponised chatbots to deliver more persuasive phishing messages, react to new events and send messages via social media sites. In the future, we can also expect to see the use of deepfakes on the digital battlefield, drone swarms capable of disrupting communications or engaging in surveillance, and quantum computing devices with the ability to break almost any encrypted system.” To ease rising tensions and prevent nation states from being drawn into more hostile cyber attacks, 70% of the expert panel said they thought some kind of international treaty would ultimately be necessary (by no means a new idea). But just 15% of them thought a cyber convention would be agreed this decade; 37% said it was more likely to come in the 2030s, and 30% said it would probably never happen.


Quantum computer based on shuttling ions is built by Honeywell

Trapped-ion qubits were used to implement the first quantum logic gates in 1995, and the proposal for a quantum charged coupled device (QCCD) – a type of quantum computer with actions controlled by shuffling the ions around – was first made in 2002 by researchers led by David Wineland of the US National Institute of Standards and Technology, who went on to win the 2012 Nobel Prize for Physics for his work. Quantum gates have subsequently been demonstrated in multiple platforms, from Rydberg atoms to defects in diamond. The quantum computing technology first adopted by the IT giants, however, was the solid-state qubit: a superconducting circuit that can be mounted directly onto a chip. These rapidly surpassed the benchmarks set by trapped ions, and are used in record-breaking machines from IBM and Google: “Working with trapped ions, I would be asked by people, ‘Why aren’t you working with superconducting qubits? Isn’t that race pretty much already settled?’,” says Winfried Hensinger of the UK’s University of Sussex. Recently, however, the progress made using superconducting circuits appears to be slowing as quantum computers integrate more and more qubits.


How MPC can solve blockchain’s data privacy conundrum

MPC, or multi-party computation, solves for confidentiality by utilizing a network of computation nodes that compute directly on encrypted data while maintaining zero knowledge about the data. For example, an employer may want to find out the average age of their employees. For privacy reasons, those employees may not be willing to share their ages, but through secret sharing, each employee can contribute their age without any individual age being attributable to them. The possibilities this technology enables are endless; one need only think of the benefits it could bring to industries such as banking and insurance. While MPC solves for privacy, blockchain itself can protect individual data against data breaches via the decentralization of sensitive information. Alone, however, blockchain lacks the infrastructure required to ensure data remains private. ... Not only is the pairing of MPC technology and blockchain a better solution for safeguarding consumer data than those currently in existence, it is one of the most viable solutions for effectively dealing with the monumental problem of data security.
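The average-age example can be made concrete with additive secret sharing, one of the simplest building blocks of MPC. This is a single-process sketch of the idea, not a production protocol; the ages and node count are made up:

```python
import random

# Additive secret sharing sketch: each employee splits their age into
# random shares that sum to the age modulo a large prime. No single
# node ever sees an individual age, yet the sum (and hence the average)
# can be reconstructed from the combined shares.
P = 2**61 - 1  # a large prime modulus

def share(secret, n_parties):
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

ages = [34, 29, 41, 52]   # private inputs, one per employee
n_nodes = 3               # computation nodes

# Each employee sends one share to each node.
all_shares = [share(a, n_nodes) for a in ages]

# Node i receives the i-th share of every employee and sums them locally.
node_sums = [sum(s[i] for s in all_shares) % P for i in range(n_nodes)]

# Combining the node sums reveals only the total, not individual ages.
total = sum(node_sums) % P
print(total / len(ages))  # 39.0
```

Each node's view is a list of uniformly random numbers, so it learns nothing about any one employee; only the combined result is meaningful.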


How Do Large Firms Train ML Models At Scale?

GPipe is a distributed machine learning library that uses synchronous stochastic gradient descent alongside pipeline parallelism to train any DNN consisting of multiple sequential layers. GPipe partitions a model across various accelerators and splits each mini-batch of training examples into even smaller micro-batches. Hence, GPipe’s accelerators can operate in parallel and maximise the scalability of the training process. It allows easy deployment of more accelerators to train large models and further scale performance without tuning hyperparameters.
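A conceptual sketch of the micro-batching idea (the stage functions are toy stand-ins for groups of layers on separate accelerators; this is not GPipe's API):

```python
# Conceptual sketch of GPipe-style micro-batching: a mini-batch is split
# into micro-batches so that, on real hardware, stage 2 can process
# micro-batch k while stage 1 is already working on micro-batch k+1,
# keeping all accelerators busy instead of idle.
def split_into_microbatches(batch, size):
    return [batch[i:i + size] for i in range(0, len(batch), size)]

stage1 = lambda xs: [x * 2 for x in xs]   # "layers" on accelerator 1
stage2 = lambda xs: [x + 1 for x in xs]   # "layers" on accelerator 2

batch = list(range(8))
microbatches = split_into_microbatches(batch, size=2)  # 4 micro-batches

# Executed sequentially here; in a real pipeline the stages overlap.
outputs = []
for mb in microbatches:
    outputs.extend(stage2(stage1(mb)))

print(outputs)  # [1, 3, 5, 7, 9, 11, 13, 15]
```

With one big batch, accelerator 2 would sit idle until accelerator 1 finished everything; with micro-batches the pipeline fills, which is the source of GPipe's scalability.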


Data validates future of work looks quite different than pre-pandemic

Both private and professional lives are slowly readopting former practices, such as eating inside a restaurant. As we cautiously return to normal, road warriors are ready to get back on the road, but we're also excited to keep some of the improved healthcare, restaurant and retail experiences we've discovered over the last year. Respondents cited the top four things they missed while working remotely: spontaneous interactions with colleagues they wouldn't have talked to otherwise; simply being around other people; exposure to a diversity of perspectives and ideas; and productivity. Qualtrics discovered that 51% of respondents reported improved productivity during the pandemic lockdown, and respondents were twice as likely to say their well-being improved as to say it declined. Managers concur: 55% said their direct reports have been more productive. Generationally, 54% of millennials said they're more productive, as did 53% of Gen Z, 48% of Gen X and 34% of boomers. Productivity has improved due to flexible schedules (31%), no commute (26%), more control over workspace (24%), ability to focus with fewer work interruptions (24%) and more privacy and personal space (23%).


The benefits of cyber threat intelligence

All of this saves time and helps them be more effective at mitigating threats and reducing risks. CTI allows the SOC to see beyond the perimeter, so they are aware of threats before those threats hit their infrastructure. That gives the SOC time to prepare and tweak defenses, such as deploying specific monitoring rules, or simply to know what to be on the lookout for. And when dealing with incidents or alerts, having this additional context allows them to place the individual alert or alerts in the wider context of who is behind it, what their aims are, what the typical next steps would be, or even what must have gone before for this to occur. All of that makes it easier to determine how to respond. And when dealing with multiple alerts or incidents, as SOCs do, having this context allows you to prioritize, separating the wheat from the chaff as it were. That's critical, as many SOCs are resource-strained, and knowing which items to focus on helps make the most effective use of limited resources.



Quote for the day:

"It's good to trust others but, not to do so is much better." -- Benito Mussolini

Daily Tech Digest - April 11, 2021

One-stop machine learning platform turns health care data into insights

To turn reams of data into useful predictions, Cardea walks users through a pipeline, with choices and safeguards at each step. They are first greeted by a data assembler, which ingests the information they provide. Cardea is built to work with Fast Healthcare Interoperability Resources (FHIR), the current industry standard for electronic health care records. Hospitals vary in exactly how they use FHIR, so Cardea has been built to "adapt to different conditions and different datasets seamlessly," says Veeramachaneni. If there are discrepancies within the data, Cardea's data auditor points them out, so that they can be fixed or dismissed. Next, Cardea asks the user what they want to find out. Perhaps they would like to estimate how long a patient might stay in the hospital. Even seemingly small questions like this one are crucial when it comes to day-to-day hospital operations — especially now, as health care facilities manage their resources during the Covid-19 pandemic, says Alnegheimish. Users can choose between different models, and the software system then uses the dataset and models to learn patterns from previous patients, and to predict what could happen in this case, helping stakeholders plan ahead.
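The assemble-audit-predict workflow described above can be illustrated with a deliberately tiny stand-in (this is not Cardea's actual API; the record fields and the mean-based "model" are invented for illustration):

```python
# Conceptual stand-in for the pipeline described above: assemble records,
# audit them for discrepancies, then fit a trivial baseline "model" that
# predicts length of stay as the mean over past patients.
records = [
    {"age": 71, "length_of_stay": 6},
    {"age": 64, "length_of_stay": 4},
    {"age": 58, "length_of_stay": None},   # discrepancy: missing label
]

def audit(records):
    """Flag records with missing fields so they can be fixed or dismissed."""
    return [r for r in records if None in r.values()]

flagged = audit(records)
clean = [r for r in records if r not in flagged]

mean_stay = sum(r["length_of_stay"] for r in clean) / len(clean)
print(f"predicted stay: {mean_stay:.1f} days")  # predicted stay: 5.0 days
```

The real system swaps the mean for learned models and FHIR-shaped data, but the staging is the same: bad records are surfaced by the auditor before any model sees them.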


8 Ways Digital Banking Will Evolve Over the Next 5 Years

The initial shift toward digital financial services saw an ad hoc response from regulators. As new technologies come into play and tech giants like Google and Apple become increasingly disruptive in the financial industry, these transformations will force policymakers to identify emerging threat vectors and comprehensively address risk. In contrast to today’s mostly national systems of oversight, a global approach may be necessary to ensure stability in the sector, and we may see the rise of new licensing and supervisory bodies. The future of digital banking appears bright, but the unprecedented pace of innovation and shifts in consumer expectations demand a new level of agility and forward-thinking. Even as financial institutions attempt to differentiate themselves from competitors, co-innovation will become an integral part of success. People and technology will both play critical roles in these developments. Tech capabilities and digital services must be extremely resilient, constantly available at the time of customer need. Human capital, however, will be as crucial as any other asset. Leaders will have to know how to upskill, reskill and retain their talent to promote innovation. 


A new era of innovation: Moore’s Law is not dead and AI is ready to explode

We sometimes use artificial intelligence and machine intelligence interchangeably. This notion comes from our collaborations with author David Moschella. Interestingly, in his book “Seeing Digital,” Moschella says “there’s nothing artificial” about this: There’s nothing artificial about machine intelligence just like there’s nothing artificial about the strength of a tractor. It’s a nuance, but precise language can often bring clarity. We hear a lot about machine learning and deep learning and think of them as subsets of AI. Machine learning applies algorithms and code to data to get “smarter” – make better models, for example, that can lead to augmented intelligence and better decisions by humans, or machines. These models improve as they get more data and iterate over time. Deep learning is a more advanced type of machine learning that uses more complex math. The right side of the chart above shows the two broad elements of AI. The point we want to make here is that much of the activity in AI today is focused on building and training models. And this is mostly happening in the cloud. But we think AI inference will bring the most exciting innovations in the coming years.


Rethinking Ecommerce as Commerce at Home

Ecommerce is all grown up. It’s time to break away from the early-internet paradigm where online shopping was a new, “electronic” form of shopping. Today, almost all commerce involves varying degrees of digital elements (discovery, price comparison, personalization, selection, ordering, payment, delivery, etc.). The defining factor is not whether commerce is digital; rather, it is the optimal location for a retailer to meet a consumer’s needs. Shopping happens on a spectrum between home and the store. As such, ecommerce is better understood as commerce at home, and Amazon was the early winner. Great retailers focus on convenience or the experiential. In the new paradigm, certain retail truths persist. For example, all great retailers have focused primarily on either convenience retail or experiential retail. To be clear, any retail can be a great experience, but the priority matters. Amazon focuses ruthlessly on convenience. The outcome is a great customer experience. To drive growth, Amazon has prioritized speed and selection over consultation and curation. Amazon’s focus on convenience has yielded an (incredibly) high-volume, low-margin retail business.


These are the AI risks we should be focusing on

AI may never reach the nightmare sci-fi scenarios of Skynet or the Terminator, but that doesn’t mean we can shy away from facing the real social risks today’s AI poses. By working with stakeholder groups, researchers and industry leaders can establish procedures for identifying and mitigating potential risks without overly hampering innovation. After all, AI itself is neither inherently good nor bad. There are many real potential benefits that it can unlock for society — we just need to be thoughtful and responsible in how we develop and deploy it. For example, we should strive for greater diversity within the data science and AI professions, including taking steps to consult with domain experts from relevant fields like social science and economics when developing certain technologies. The potential risks of AI extend beyond the purely technical; so too must the efforts to mitigate those risks. We must also collaborate to establish norms and shared practices around AI like GPT-3 and deepfake models, such as standardized impact assessments or external review periods.


India Inc. must consider Digital Ethics framework for responsible digitalisation

An accelerated pace of digital transition, consumption of goods and services via app-based interfaces, and the proliferation of data bring numerous risks, such as biased decision-making processes being transferred to machines or algorithms at the development stage by humans, a Deloitte statement said on Friday. "These biases can be a threat to the reputation and trust towards stakeholders, as well as cause operational risks," it said. Partner, Deloitte India, Vishal Jain, said the pandemic compelled businesses and consumers to embrace digital technologies such as artificial intelligence, big data, cloud, IoT and more in a big way. "However, the need of the hour is to relook at business operations layered on digital touchpoints through the lens of ethics, given that biases might arise owing to faster response times to issues," he said. Societal pressure to do "the right thing" now calls for careful consideration of the trade-offs involved in the responsible use of technology, Jain said, adding that this interplay becomes vital to managing data privacy rights while actively adopting customer analytics for personalised service.


How to Be a Better Leader By Building a Better Tribe

All of our journeys are exquisitely different, yet come with a unique set of challenges that can blur our leadership lens if not properly focused. This can snowball into personal detriment. Therefore, your mental, physical, and emotional health is just as important (if not more so) than your professional and economic health—they are interrelated. Identify a therapist, wellness clinician, spiritual leader, life coach, physical trainer and/or anyone who can support you in becoming an even greater version of yourself. Let's call this person the "healer". Make time for physical activity, healthy food choices and spending time with loved ones. Ensure that the investment you make in your team members, you also make in yourself. It is up to you to create your rituals for personal success. What will they entail? ... Similar to curating a list of your tribal elders, remember that you are also an elder to a younger leader in your collective. We were all afforded a different set of societal privileges based on constructs of race/ethnicity, gender, sexual orientation, cognitive and physical abilities, etc. I think it’s important to utilize some of these privileges to be an ally/co-conspirator to someone who may not have the same position in society.


What is an enterprise architect? Everything you need to know about the role

The role of EA is closely connected to solutions architect, but tends to be broader in outlook. While EAs focus on the enterprise-level design of the entire IT environment, solution architects find spot solutions to specific business problems. EAs also work closely with business analysts, who analyse organisational processes, think about how technology might help, and then make sure tech requirements are implemented successfully. Looking upwards, EAs tend to work very closely with chief information officers (CIOs). While the CIO focuses on understanding the wider business strategy, the EA works to ensure that the technology that the organisation buys will help it to meet its business goals, whether that's improvements in productivity, gains in operational efficiency or developing fresh customer experiences, while also working with others – like the security team – to ensure everything remains secure. Nationwide CIO Gary Delooze is a former EA who says a really good enterprise architect will bring the business and IT teams together to create a technology roadmap.


How Blockchain Can Simplify Partnerships

To appreciate the ways in which blockchains can support complex collaborations, consider the task of shipping perishable goods across borders — a feat that requires effective coordination among suppliers, buyers, carriers, customs, and inspectors, among others. When the cargo passes from one party to another, a flood of information is transferred with it. Each party keeps its own record and tends to communicate with one partner at a time, which often leads to inconsistent knowledge across participants, shipping delays, and even counterfeit documentation or products. If, say, the buyer expects the goods to be constantly cooled throughout the shipping process and temperatures exceed agreed thresholds, a dispute is likely to arise among the buyer, the supplier, and the carrier, which can devolve into lengthy wrangling. The carrier may haggle over liability to lower the compensation, arguing that customs delaying the transportation, or inspectors handling the cargo improperly, are to blame. The buyer will ask the supplier for a remedy, who in turn needs to negotiate with the carrier. And so on. Problems like these can manifest in any collaboration that requires cumbersome information sharing among partners and may involve disputes in the process.
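The cold-chain dispute in the example becomes tractable once every party appends readings to one shared record governed by one agreed rule. This toy sketch shows that shared-record idea as a plain data structure, not a real smart contract; the parties and readings are invented:

```python
# Toy sketch of a shared shipment log: every party appends temperature
# readings to the same record, and a single agreed rule decides whether
# the cold chain was broken -- leaving no room for inconsistent records
# about who held the cargo when the threshold was exceeded.
AGREED_MAX_TEMP_C = 8.0

shipment_log = [
    {"party": "supplier", "temp_c": 4.5},
    {"party": "carrier",  "temp_c": 6.1},
    {"party": "customs",  "temp_c": 9.3},   # threshold exceeded here
    {"party": "carrier",  "temp_c": 5.0},
]

def find_breaches(log, max_temp):
    """Return the entries where the agreed threshold was exceeded."""
    return [e for e in log if e["temp_c"] > max_temp]

breaches = find_breaches(shipment_log, AGREED_MAX_TEMP_C)
for b in breaches:
    print(f"breach while in custody of {b['party']}: {b['temp_c']} C")
```

With one tamper-resistant log, the "haggling over liability" described above reduces to reading off which party held the cargo when the breach occurred.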


Practical Points from the DGPO: An Introduction to Information Risk Management

Individuals are starting to pay attention to organizational vulnerabilities that compound the risks associated with managing, protecting, and enabling access to information: poor data quality, insufficient protection against data breaches, an inability to auditably demonstrate compliance with numerous laws and regulations, and customer concerns about ethical and responsible corporate use of personal data. And as organizations expand their data management footprints across increasingly complex hybrid multicloud environments, there has never been a greater need for systemic information risk management. ... In general, “risk” affects the way a business operates in a number of ways. At the most fundamental level, it inhibits quality excellence. Moreover, exposure to risk not only affects project objectives but also poses threats of quantifiable damage, injury, loss, liability, or other negative occurrences that may be avoided through preemptive action. Using the Wikipedia definition as a start, we can define information risk as “the potential for loss of value due to issues associated with managing information.”



Quote for the day:

"The actions of a responsible executive are contagious." -- Joe D. Batton

Daily Tech Digest - April 10, 2021

15 Cybersecurity Pitfalls and Fixes for SMBs

We have, obviously, the nation-state actor, which for a typical SMB would be kinda hard to protect against. Especially now, as some evidence suggests that there were more than a thousand developers contributing to the SolarWinds attack and so forth. And I think that might be something that’s not in the context of a typical SMB IT admin. But then you also have groups that are teenagers hacking around from Mom’s basement, right? You have those guys. You have legitimate criminal enterprises that are in it for profit, that have balance sheets, that have accountants that are actually doing things for profit and for their own revenue. And so when you look at the tools that are available to these organizations, if you look at the black market, and if you look at some of the things that are happening on the internet, you can actually buy toolkits for exploitation. You can buy toolkits that will allow some of these attacks to happen. And from the perspective of a malicious actor, the idea is not necessarily to target a specific business and to get their data. It’s kinda like fishing. You know, the larger net you cast, the more fish you’re going to catch. 


8 Security & Privacy Apps to Share With Family and Friends

Fifteen percent of consumers have left at least one online purchase process because of perceived security issues in the retail website, one report found last holiday season. Fourteen percent declined to purchase an item because of fears over how their data would be handled. And adoption of good security habits is on the uptick: Duo Labs' "2020 State of the Auth" report found more than half (53%) of respondents had used two-factor authentication (2FA), an increase from 28% two years prior. While most (71.5%) had experienced 2FA via SMS, more than one-third (36%) had used an authenticator app. We live in a time when most people spend hours a day on their mobile devices to do their jobs, keep in touch with friends and family, schedule appointments, handle their finances, and complete myriad other tasks. As smartphones handle more of our data, the need to secure them grows. There are several kinds of mobile apps to boost personal security and privacy, from password managers, to secure messaging apps, to anti-theft apps, and more. As a security pro, you may have your device locked down — but your family and friends may not know which steps they should be taking.
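The authenticator apps mentioned above are less mysterious than they look: most implement TOTP (RFC 6238), an HMAC of the current 30-second time step truncated to six digits. A minimal sketch using only the standard library (the secret below is a made-up example, not a real key):

```python
import base64
import hmac
import struct
import time

# Minimal TOTP (RFC 6238) sketch: the code is an HMAC-SHA1 of the current
# 30-second time step, dynamically truncated to six decimal digits.
def totp(secret_b32: str, timestamp: float, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(timestamp) // step)   # 8-byte big-endian
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Both the phone app and the server derive the same code from the shared
# secret, so no code ever has to travel over SMS.
print(totp("JBSWY3DPEHPK3PXP", time.time()))
```

This is why the report distinguishes SMS 2FA from authenticator apps: app-generated codes never cross the phone network, removing SMS interception from the attack surface.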


Threat matrix for storage services

Within cloud storage services, we witness users sharing various file types, such as Microsoft Office and Adobe files, and attackers taking advantage of this to deliver malware through email. Moreover, use cases of cloud storage go beyond internal interfaces, with business logic being shared with third parties. Therefore, the Azure Defender for Storage security team has mapped the attack surface undertaken by leveraging the Storage service. This post reflects our findings based on the MITRE ATT&CK® framework, which is a knowledge base for tactics and techniques employed in cyberattacks. MITRE matrices have become an industry standard and are embraced by organizations aiming to understand potential attack vectors in their environments and to ensure they have adequate detections and mitigations in place. While analyzing the security landscape of storage, and applying the same methodology we defined for Kubernetes, we noticed the resemblances and differences across techniques. Whilst Kubernetes runs on top of an operating system, its threat matrix is structured like the MITRE matrices for Linux or Windows.


Visa Describes New Skimming Attack Tactics

Visa's Payment Fraud Disruption team reports that cybercriminals are increasingly using web shells to establish command and control over retailers' servers during payment card skimming attacks. "As a result, eSkimming, or digital skimming, is among the top threats to the payments ecosystem," according to the Visa report. The web shells enable fraudsters conducting digital skimming attacks on e-commerce sites to establish and maintain access to compromised servers, deploy additional malicious files and payloads, facilitate lateral movement within a victim's network and remotely execute commands, Visa says. The most common methods for deploying a web shell are malicious application plug-ins and PHP code, Visa reports. Visa reached its conclusions after studying 45 digital skimming attacks in 2020. In February, Microsoft reported spotting 140,000 web shells per month on servers from August 2020 to January 2021, which it said is almost twice the number from the same period the year before. These web shells, however, were not being used for retail attacks. Visa notes attacks skimming payment card data from online checkout functions of e-commerce sites have become more prevalent during the COVID-19 pandemic as consumers have shifted to online shopping.


Dodge Adversarial AI Attacks Before It's Too Late!

In this tech-oriented world, where hackers and technological advancements emerge in parallel, artificial intelligence has made big strides recently in understanding language. Yet artificial intelligence can still suffer from potentially dangerous and alarming kinds of algorithmic blind spots. Research shows how AI algorithms that parse and analyze text can be tricked and deceived by precisely crafted phrases: a sentence that seems perfectly ordinary to you may have the strange ability to dodge the AI algorithm. The expert community estimates that by the year 2040, artificial intelligence will be capable of performing all the intellectual functions of human beings. This might seem frightening, but understanding the techniques outlined here radically improves your chances of defending against them. Deceiving facial recognition and tricking speech recognition is already child's play for hackers and emerging cybercriminals, while adversarial attacks invite deeper, more conceptual speculation.
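The "precisely crafted" part of an adversarial attack can be illustrated numerically. This is a deliberately tiny two-dimensional stand-in for the gradient-sign idea (FGSM-style), not an attack on any real model; the weights and inputs are invented:

```python
# Tiny numeric illustration of an adversarial perturbation: a linear
# classifier's decision is flipped by a small change chosen precisely
# against the direction of its weights.
def sign(v):
    return 1.0 if v > 0 else -1.0

w = [2.0, -1.0]          # classifier weights
b = 0.0

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else -1

x = [0.3, 0.2]           # original input, classified as +1 (score 0.4)
eps = 0.3                # small perturbation budget

# Nudge each coordinate against the sign of its weight to lower the score.
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x))        # 1
print(predict(x_adv))    # -1 -- a small, targeted change flips the label
```

A random perturbation of the same size would rarely flip the prediction; choosing it along the model's own gradient is what makes adversarial inputs so effective, and the same principle scales up to crafted phrases against text models.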


Digital transformation: 5 trends that could shift your strategy

Application development, modernization, and integration are central practices in digital transformations that help organizations launch new business capabilities, improve customer experiences, and drive business process efficiencies. Until recently, CIOs and IT leaders considered implementations as a build-vs.-buy decision or used an RPA platform to automate workflows. Many invested in maturing agile and DevOps to continuously deliver cloud-native microservices and applications when building applications. Then COVID hit, and more IT leaders pursued low-code and no-code platforms to accelerate application development. Having multiple approaches to develop and support application development and integration is beneficial, but today, a growing number of options provide a complete hyperautomation platform. Hyperautomation app dev platforms have a mix of low-code, no-code, automation, and machine learning capabilities, provide out-of-the-box DevOps capabilities, and align the dev lifecycle to agile processes. Collectively, they can accelerate the development process and improve the productivity and quality of development efforts.


Using a schema registry to ensure data consistency between microservices

If Microservice A is holding data in a structure that is incongruent with Microservice B’s schema, some mapping will need to be done. There’s no magic. But, at the least, the developer writing the data exchange code will be aware of the conditions to satisfy because Microservice B’s data schema is well known. It’s not a question of reverse engineering some code in play and then having to figure out the mapping. Having the reliability provided by a single source of truth is a definite time-saver. Another area where a schema registry provides significant value is validation. In the world of data management, there are few experiences more disappointing than writing a bunch of data validation code based on a given example, only to have the code become worthless because the underlying data schema you used was changed by a Data Architect somewhere upstream in the development process. Using a schema registry minimizes the problem; in some cases, it makes the issue go away altogether. The way it works is that when it comes time to validate some data, the developer will get the schema associated with the submitted data from the domain’s schema registry.
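The workflow can be sketched with a minimal in-memory registry. This is an illustrative toy, not the API of any real registry product: a service fetches the registered schema for a subject and validates a payload against it before exchanging data.

```python
# Minimal in-memory schema registry sketch (hypothetical API): services
# fetch the registered schema for a subject and validate payloads
# against it, instead of hard-coding structure assumptions.

class SchemaRegistry:
    def __init__(self):
        self._schemas = {}  # subject -> {field name: expected type}

    def register(self, subject, schema):
        self._schemas[subject] = schema

    def get(self, subject):
        return self._schemas[subject]

def validate(payload, schema):
    """Return a list of violations; an empty list means the payload conforms."""
    errors = []
    for field, expected_type in schema.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

registry = SchemaRegistry()
registry.register("orders", {"order_id": int, "customer": str})

ok = validate({"order_id": 7, "customer": "acme"}, registry.get("orders"))
bad = validate({"order_id": "7"}, registry.get("orders"))
print(ok)   # no violations
print(bad)  # a type violation and a missing-field violation
```

If the Data Architect changes the "orders" schema, the change lands in the registry, and every service validating through it sees the new contract at once.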


US Blacklists 7 Chinese Supercomputer Entities

Citing national security concerns, the U.S. Department of Commerce this week placed seven Chinese supercomputer organizations on the Entity List, which effectively bars them from receiving supplies or components from American companies. Commerce Secretary Gina M. Raimondo notes that the high-performance computing technologies developed by these entities could be used in weapons of mass destruction programs. "Supercomputing capabilities are vital for the development of many - perhaps almost all - modern weapons and national security systems, such as nuclear weapons and hypersonic weapons," Raimondo says. "The Department of Commerce will use the full extent of its authorities to prevent China from leveraging U.S. technologies to support these destabilizing military modernization efforts." Now that these organizations have been placed on the Entity List, the Commerce Department will require them to apply for a special license to do business with U.S. companies or receive supplies or components from American firms. The department's Bureau of Industry and Security must review and then approve or deny all license applications for organizations on the Entity List. 


Crossing the Line: When Cyberattacks Become Acts of War

The Cold War concept isn’t outdated. In the decades since the fall of the Soviet Union, the battleground has simply shifted from conflicts between ideological proxy governments to cyberspace. And the opponents have grown from a few primary nations into a broad range of sovereign threat actors. The question is, when does a cyberattack cross the line between a criminal action or mere prank, to an act of war? Is it the nature of the victim? The nature of the attacker? The nature of the damage? Or a combination of them all? To be sure, this is not a determination for cybersecurity professionals to make. Our role is to defend IT assets for our organizations by reducing risk, mitigating threats, remediating the situation after an attack, and generally trying to keep everything running safely and smoothly. It doesn’t matter whether we are facing a script kiddie trying to deface a website, a political hacktivist trying to make a statement, a cybercriminal trying to steal or ransom our data, or a state actor trying to steal confidential information. Our goal is to keep them out, and minimize the damage when they do manage to get in. The only thing that changes is how well-resourced and tenacious our opponents are.


4 Body Language Mistakes and How to Fix Them

When engaged in a difficult conversation, without empathizing with how the other person might be feeling in the moment, we may appear cold, unemotional, and downright rude. By adopting kindness and warmth in our body language, we can convey the right message without necessarily making them feel bad. When someone is passed over for a promotion, showing an attitude of indifference without understanding the value it holds in their life can make them resent you. Body language that shows presence and concern by giving them an opportunity to express their feelings can build better relationships. When a co-worker is grieving a personal loss, you may appear too intrusive in your body language when all they need is space to let the feelings subside. It could be a personal preference or a cultural nuance, but without understanding their context you may actually do more harm than good. When dealing with difficult people, your body language may switch to a fight-or-flight response. But, if you take a moment to analyze the situation without falling prey to the fundamental attribution error, you may understand the rationale behind their behavior.



Quote for the day:

“Prove your integrity day-by-day, by keeping promises.” -- S. Chris Edmonds

Daily Tech Digest - April 09, 2021

Seeing, doing, and imagining

Association, which Pearl, a Turing Award winner, identifies as the first of three steps on his ladder of causation, won’t help executives answer many of the questions they need to ask when formulating corporate strategy, making investment decisions, or setting prices. To answer questions such as, “What will raising prices by 10 percent do to revenues?” you need to start climbing Pearl’s ladder. Intervention is the second step on the ladder. “Intervention ranks higher than association because it involves not just seeing but changing what is,” Pearl writes. That’s why companies are running scads of randomized controlled experiments these days. They are changing things on a small scale to figure out what effects an action will produce on a large scale. Real-world experiments aren’t a necessity — you can get a machine to figure out the effects of an intervention without actually changing anything in the real world. ... The third and highest rung on Pearl’s causation ladder is counterfactuals. Pursuing causation at this level means determining what would have happened if your company had done something in the past. For instance, what would revenues be today if you had cut prices by 10 percent a year ago?


The time is right for passwordless authentication

People just can’t be trusted to set reliable passwords, to change them frequently, to make sure they are strong, and to keep them secure. Forcing password change simply creates bad feeling and password reuse. Two-factor authentication is little better as a solution. It still relies on a password, often with a second PIN sent to a mobile phone. I’ve heard that some businesses and schools are trying to implement two-factor solutions, but users do not feel comfortable disclosing a private mobile number as a means to authenticate and log on, so the business needs to provide a second phone to the user, which is expensive and gives the user the task of carrying two phones around. Asking people to do more to achieve a goal than they were doing before is a sure-fire way to disgruntle them. Passwordless authentication removes all of these problems. It gives end-users less to remember, and less to think about. Login is faster, easier, and in comparison to tapping in passwords, waiting for a text to come through and tapping in a PIN, it is seamless and painless.


AI can stem the tide of increasing fraud and money laundering

Rather than having developers rewrite systems each time legislation changes, the new breed of AI-enabled RegTech can ‘learn’, interpret and comply with applicable laws, including KYC and AML. No system will ever be perfect – there is still the need for human oversight and there is still the possibility for criminals to find loopholes. These criminals are increasingly using technology to exploit weak links in regulatory frameworks, but as fast as they can move to deploy new schemes, machine learning systems will be able to counter them. AI-based technology has moved beyond an experimental phase and is ready to become a competitive differentiator in financial services, but there is still a level of reticence on the part of the industry when it comes to what many perceive as handing over compliance to machines. Traditionally, banks and other companies that handle monetary transactions have had to be conservative in nature. Data tends to be housed in silos, often on legacy systems, rather than being visible across the whole organisation, even though it is that organisation-wide visibility that allows AI-based systems to deliver the greatest value.


Root Cause Analysis for Data Engineers

In theory, root causing sounds as easy as running a few SQL queries to segment the data, but in practice, this process can be quite challenging. Incidents can manifest in non-obvious ways across an entire pipeline and impact multiple, sometimes hundreds, of tables. For instance, one common cause of data downtime is freshness — i.e. when data is unusually out-of-date. Such an incident can be a result of any number of causes, including a job stuck in a queue, a time out, a partner that did not deliver its dataset on time, an error, or an accidental scheduling change that removed jobs from your DAG. In my experience, I’ve found that most data problems can be attributed to one or more of these events: An unexpected change in the data feeding into the job, pipeline or system; A change in the logic (ETL, SQL, Spark jobs, etc.) transforming the data; An operational issue, such as runtime errors, permission issues, infrastructure failures, schedule changes, etc. Quickly pinpointing the issue at hand requires not just the proper tooling, but a holistic approach that takes into consideration how and why each of these three sources could break.
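A freshness check of the kind described above can be sketched in a few lines. The table names, timestamps, and cadence here are hypothetical, not from the article: the idea is simply to compare a table's most recent update against the cadence you expect.

```python
# Hedged sketch of a freshness monitor: flag a table whose newest data
# is older than its expected update cadence.
from datetime import datetime, timedelta

def is_stale(last_updated, expected_cadence, now=None):
    """Return True if the table has gone longer than its cadence without an update."""
    now = now or datetime.utcnow()
    return now - last_updated > expected_cadence

now = datetime(2021, 4, 9, 12, 0)   # frozen "current time" for the example
daily = timedelta(hours=24)         # this table should update daily

fresh = is_stale(datetime(2021, 4, 9, 3, 0), daily, now)  # updated 9h ago
stale = is_stale(datetime(2021, 4, 7, 3, 0), daily, now)  # updated 2 days ago
print(fresh, stale)
```

A stale flag is only the symptom; the next step is walking the three sources listed above (input data, transformation logic, operations) to find which one broke.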


Gamifying machine learning for stronger security and AI models

Computer and network systems, of course, are significantly more complex than video games. While a video game typically has a handful of permitted actions at a time, there is a vast array of actions available when interacting with a computer and network system. For instance, the state of the network system can be gigantic and not readily and reliably retrievable, as opposed to the finite list of positions on a board game. Even with these challenges, however, OpenAI Gym provided a good framework for our research, leading to the development of CyberBattleSim. CyberBattleSim focuses on threat modeling the post-breach lateral movement stage of a cyberattack. The environment consists of a network of computer nodes. It is parameterized by a fixed network topology and a set of predefined vulnerabilities that an agent can exploit to laterally move through the network. The simulated attacker’s goal is to take ownership of some portion of the network by exploiting these planted vulnerabilities. While the simulated attacker moves through the network, a defender agent watches the network activity to detect the presence of the attacker and contain the attack.
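The setup can be illustrated with a toy in the spirit of CyberBattleSim, though this is not its actual API: a fixed topology with planted vulnerabilities, an attacker agent that exploits them to move laterally, and a defender that observes the resulting events.

```python
# Toy lateral-movement simulation (illustrative only, not CyberBattleSim's
# API): node -> (vulnerability needed to own it, neighbours reachable from it).
topology = {
    "workstation": ("phishing", ["fileserver"]),
    "fileserver":  ("smb-exploit", ["database"]),
    "database":    ("weak-password", []),
}

attacker_exploits = {"phishing", "smb-exploit"}  # exploits the agent knows
owned = ["workstation"]                          # initial foothold
alerts = []                                      # what the defender observes

# Breadth-first lateral movement: from each owned node, try to exploit
# every reachable neighbour whose vulnerability the attacker can use.
frontier = list(owned)
while frontier:
    node = frontier.pop(0)
    for neighbour in topology[node][1]:
        vuln = topology[neighbour][0]
        if vuln in attacker_exploits and neighbour not in owned:
            owned.append(neighbour)                # attacker takes the node
            alerts.append(f"{node}->{neighbour}")  # defender logs the movement
            frontier.append(neighbour)

print(owned)   # portion of the network the simulated attacker controls
print(alerts)  # activity available to the defender agent
```

Here the attacker reaches the file server but stalls at the database, whose planted vulnerability it cannot exploit; a reinforcement learning agent would learn which exploit sequences maximize ownership before the defender contains it.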


Which Industries Would Benefit the Most From Agile Innovation

It may seem surprising that the financial sector is struggling to reach its innovation goals. However, Financier Worldwide found in 2015 that 90% of leaders admitted there was a lack of focus on radical innovation. Several years later, Deloitte’s report ‘Regulatory Trends Outlook for 2018’, claimed the financial industry was being hindered by a ‘legacy infrastructure’ that would take years to transform. For example, a focus on traditional product development means that customer and end-user feedback can’t be incorporated into the development process. Agile methods could rectify this by implementing new collaborative and customer-focused processes to product development. Teams could use a centralised system for the development of prototypes, which would be shared internally in a project’s initial phases. They can then conduct beta testing with a select group of end-users, with feedback incorporated iteratively into the final stages. Another issue is how increasingly stringent regulations may be inhibiting innovation. Financial firms are set to spend an estimated 10% of their revenue on compliance costs by 2022.


Why machine learning struggles with causality

Why do machine learning models fail at generalizing beyond their narrow domains and training data? “Machine learning often disregards information that animals use heavily: interventions in the world, domain shifts, temporal structure — by and large, we consider these factors a nuisance and try to engineer them away,” write the authors of the causal representation learning paper. “In accordance with this, the majority of current successes of machine learning boil down to large scale pattern recognition on suitably collected independent and identically distributed (i.i.d.) data.” i.i.d. is a term often used in machine learning. It supposes that random observations in a problem space are not dependent on each other and have a constant probability of occurring. The simplest example of i.i.d. is flipping a coin or tossing a die. The result of each new flip or toss is independent of previous ones, and the probability of each outcome remains constant. When it comes to more complicated areas such as computer vision, machine learning engineers try to turn the problem into an i.i.d. domain by training the model on very large corpora of examples.
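The coin-flip example is small enough to run directly: each draw is independent of the last and heads always has the same probability, which is all i.i.d. means.

```python
# i.i.d. in miniature: independent draws with a constant probability of
# heads, so the empirical frequency settles near 0.5 over many flips.
import random

random.seed(0)  # fixed seed so the run is reproducible
flips = [random.random() < 0.5 for _ in range(10_000)]
heads_rate = sum(flips) / len(flips)
print(heads_rate)  # close to 0.5
```

The i.i.d. assumption breaks exactly where the article says machine learning struggles: interventions, domain shifts, and temporal structure all make tomorrow's data depend on more than a fixed, unchanging distribution.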


WhoIAM: Enabling inclusive security through identity protection and fraud prevention

IT decision-makers are usually quite tuned in to the challenges around the cost of acquiring new customers, keeping user data secure, and managing infrastructure costs. However, large groups of users are often left behind because of an inherent set of biases in identity security. For instance, authenticator apps, while secure, require a reasonably tech-savvy user. On-device biometrics such as a fingerprint sensor or retina scan create a dependency on newer, more powerful hardware. SMS-based MFA, while more readily available, is expensive both to our client and their end customers and is considered less secure than other authentication factors. Even onscreen identity verification challenges tend to be biased towards English speakers who don’t have visual impairments. Asking a non-native speaker to solve a CAPTCHA that identifies all “sidewalks” or “stop lights” often does not translate well, and CAPTCHAs are historically a poor option for the visually impaired. While these are important factors to solve for, consumer brands still have to strike the right balance between security, cost, and usability.


Five ways to control spiralling IT costs after disruption

With an ongoing need to optimise costs, many businesses are suddenly realising they have lost control of their SaaS spend. It’s now common for large businesses to have SaaS applications managed outside the IT department, multiple contracts with the same vendor, or even multiple vendors providing the same service to the business. To combat this, first you need to draw on technology solutions that will give you full visibility of all SaaS application licences and services within the business. Then you need to rationalise them. With SaaS sprawl likely to be coming from outside IT, one way of consolidating this spend is to use tools that leverage single sign-on (SSO) data stored within an organisation’s network to identify hidden licences. Once you see the full picture, you can assess where best to cut back and which licences are redundant. Following on from this point, you need to introduce more accountability for SaaS usage and spend together with strict procurement processes and user chargeback. That’s because services like file storage and collaboration can be too easy to sign up for without the knowledge of IT.
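The SSO-based audit described above can be sketched as a small aggregation. The event fields, app names, and seat counts are illustrative, not drawn from any real SSO product's log format.

```python
# Hedged sketch of an SSO-driven SaaS audit: aggregate sign-on events to
# see which licensed apps are actually used, and flag the rest.
sso_events = [
    {"user": "ana", "app": "CRM-Pro"},
    {"user": "ben", "app": "CRM-Pro"},
    {"user": "ana", "app": "Docs-Suite"},
]
licences = {"CRM-Pro": 50, "Docs-Suite": 40, "Chat-X": 25}  # seats owned

active_users = {}
for event in sso_events:
    active_users.setdefault(event["app"], set()).add(event["user"])

# Apps paid for but never signed into, and seat utilisation for the rest.
redundant = [app for app in licences if app not in active_users]
utilisation = {app: len(users) / licences[app]
               for app, users in active_users.items()}

print(redundant)
print(utilisation)
```

With that picture in hand, the redundant licences are candidates for cancellation and the low-utilisation ones for renegotiation at renewal.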


Teknion CIO on the importance of fostering multi-generational talent

Technology has changed rapidly over the years, and each generation has come into technology at a different time. Their perspectives, therefore, are very different when it comes to technology, because it’s viewed from the moment they began leveraging it, as opposed to waiting for the latest technological innovation. It’s important to cultivate a multi-generational workforce in technology, especially, because everyone has different perspectives on the opportunities, challenges and shortfalls of tech. It’s a huge opportunity for every organisation to look at these perspectives and use technology in a better way because of that. ... As a CIO, my team and I have to provide the technology that keeps the company running, for our customers as well as the employees that work here every day. Having a perspective of a multi-generational workforce, not only within the technology department, but the wider business, allows us to enable digital transformation programs with more success. At the end of the day, we have to provide the technology, applications and tools that will help people to do their job better, rather than forcing them to work in a certain way.



Quote for the day:

"Failing organizations are usually over-managed and under-led." -- Warren G. Bennis

Daily Tech Digest - April 08, 2021

5 Reasons Why I Left the AI Industry

For decades, AGI has been the main goal driving AI forward. The world will change in unimaginable ways when we create AGI. Or should I say if? How close are we to creating human-level intelligent machines? Some argue that it’ll happen within decades. Many expect to see AGI within our lifetimes. And then there are the skeptics. Hubert Dreyfus, one of the leading critics, says that “computers, who have no body, no childhood and no cultural practice, could not acquire intelligence at all.” For now, it seems that research in AI isn’t even going in the right direction to achieve AGI. Yann LeCun, Geoffrey Hinton, and Yoshua Bengio, winners of the Turing Award — the Nobel Prize of AI — in 2018, say we need to imbue these systems with common sense and we’re not close to that yet. They say machines need to learn without labels, as kids do, using self-supervised learning (also called unsupervised learning). That’d be the first step. However, there’s too much we don’t understand about the brain yet to try and build AGI. Some say we don’t need to create conscious machines to equal human intelligence.


The leap of a Cycldek-related threat actor

In the nebula of Chinese-speaking threat actors, it is quite common to see tools and methodologies being shared. One such example of this is the infamous “DLL side-loading triad”: a legitimate executable, a malicious DLL to be sideloaded by it, and an encoded payload, generally dropped from a self-extracting archive. Initially considered to be the signature of LuckyMouse, we observed other groups starting to use similar “triads” such as HoneyMyte. While it implies that it is not possible to attribute attacks based on this technique alone, it also follows that efficient detection of such triads reveals more and more malicious activity. ... Taking a step back from the FoundCore malware family, we looked into the various victims we were able to identify to try to gather information about the infection process. In the vast majority of the incidents we discovered, it turned out that FoundCore executions were preceded by the opening of malicious RTF documents downloaded from static.phongay[.]com. They were all generated using RoyalRoad and attempt to exploit CVE-2018-0802.


Your Top .NET Microservices Questions Answered

Autonomy for teams to work with their microservices is a crucial benefit of architecting cloud-native apps. It is preferred to use independent database instances to give the teams the flexibility to roll out updates, security patches, bug fixes in production without breaking other microservices. Cloud-Native app architecture takes inspiration from the famous 12-factor app methodologies. One factor, “Backing Services,” states that the Ancillary resources like the data stores, caches, message brokers should be exposed via an addressable URL. Cloud providers offer a rich assortment of managed backing services. Instead of owning and maintaining the database yourself, we recommend checking out the available database options in the cloud. ... Monolithic apps can talk with microservices if their endpoints are reachable within the infrastructure or securely using a public endpoint. Microservices and their data can either be consumed synchronously via their endpoints or asynchronously through messaging like the Event Bus. As part of modernizing techniques, we recommend the strangler pattern, which helps in incrementally migrating a legacy system.
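The "Backing Services" factor can be shown in a few lines. The environment-variable name below is a common convention rather than part of the 12-factor specification: the data store is an attached resource identified by a URL, so swapping databases is a configuration change, not a code change.

```python
# Sketch of the 12-factor "backing services" idea: the database is an
# addressable resource named by a URL in configuration.
import os
from urllib.parse import urlparse

# DATABASE_URL is a conventional (hypothetical here) variable name; in a
# real deployment the platform or cloud provider would set it.
os.environ.setdefault(
    "DATABASE_URL", "postgres://app:secret@db.example.com:5432/orders"
)

url = urlparse(os.environ["DATABASE_URL"])
config = {
    "scheme": url.scheme,
    "host": url.hostname,
    "port": url.port,
    "database": url.path.lstrip("/"),
}
print(config)
```

Because each microservice resolves its own URL, pointing a service at a managed cloud database (or a test instance) requires touching only its environment, which preserves the team autonomy described above.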


The First Time Jessica Alba Pitched Her Now-Unicorn Startup, It Failed. Here's How She Pivoted

Failure is part of every entrepreneur’s journey. When you care deeply about an idea, it can feel hard when you encounter people who don’t share or see your vision. Here are a few tips to stay the course when things aren’t going your way at first. ... Alba recruited friends at every step of the way who served as her sounding board. These people didn’t baby her and give her false hope; they asked the hard questions that exposed each and every possible weakness. Rely on trusted friends and confidantes to give you tough love, and your pitch will come off stronger to those who will have the final say. ... At first, everyone told Alba she should start with one product, then expand once that was successful. But this didn’t gel with Alba’s vision of a complete line of baby-safe products; the founder knew parents who wanted clean products wanted a brand that could provide multiple solutions. Ultimately, Alba ignored the conventional advice and launched with 17 products, which many people believed was too many. But because she didn’t compromise on that, either to venture capitalists or herself, the launch was a total success.


6 Best Practices for Remote Work by Agile Software Development Teams

The sudden shift to remote working was unexpected, but it was surprisingly well implemented in most cases. After months of remote working, let’s look at the progress being made by remote development teams. A recently published report on 50 remote agile development teams showed mixed results: 92% of teams are writing more code, by an average of 10%, which sounds good. Unfortunately, 63% of teams are releasing less frequently, with the total number of releases down by a worrying 21%; on top of this, the average release size is up by 64%, increasing risk and time to value. So before the COVID-19 pandemic, we had frequent, small releases and were very agile. Now we have infrequent, high-risk, large releases. This is not the ideal situation for agile, newly remote teams. ... First, review your remote team situation. Because we have lost the benefits of colocation, where constant interaction, easy pairing and water cooler conversations aid teamwork, we need to address collaboration in other ways. ... Remote working is a skill that requires time and effort to develop. Video conferencing is a great way to engage with your team.


To cool datacenter servers, Microsoft turns to boiling liquid

Microsoft investigated liquid immersion as a cooling solution for high-performance computing applications such as AI. Among other things, the investigation revealed that two-phase immersion cooling reduced power consumption for any given server by 5% to 15%. The findings motivated the Microsoft team to work with Wiwynn, a datacenter IT system manufacturer and designer, to develop a two-phase immersion cooling solution. The first solution is now running at Microsoft’s datacenter in Quincy. That couch-shaped tank is filled with an engineered fluid from 3M. 3M’s liquid cooling fluids have dielectric properties that make them effective insulators, allowing the servers to operate normally while fully immersed in the fluid. This shift to two-phase liquid immersion cooling enables increased flexibility for the efficient management of cloud resources, according to Marcus Fontoura, a technical fellow and corporate vice president at Microsoft who is the chief architect of Azure compute. For example, software that manages cloud resources can allocate sudden spikes in datacenter compute demand to the servers in the liquid cooled tanks.


Generalists Vs. Tech Leaders : AI Adoption At Any Stage

When asked to identify intended users for their AI tools and technologies, over half of respondents identified clinicians as target users with healthcare providers as a close second. This is a big leap from AI being used primarily by data scientists and IT professionals, as was common in years past. This trickle-down effect of users persists even further when you consider the customers of mature organizations’ AI tools. ... As advances and applications of AI technologies grow, so do their intended user bases, so it’s important for all organizations to consider who they’re tailoring usability to. A patient who is interacting with a chatbot to schedule an appointment is a lot different than a radiologist using NLP to analyze the results of an X-ray, and those are considerations that need to be evaluated when imagining the user experience. All organizations should be taking this into account, whether they’ve been deploying solutions for years now or are just getting started. As AI becomes more commercialized, newer players will take the lead from more mature companies that have had to evolve their customer base over the years.


Email overload? These new 'right to disconnect' rules could be the answer

Employees in Ireland are already protected by a number of labor laws. For example, they are not allowed to work more than 48 hours per week on average, except in very limited circumstances. The right to disconnect established in the new code, however, does not constitute a legal obligation: although the code's recommendations will be admitted as evidence in a court proceeding, failure to abide by the rules will not constitute an offence. Rather, the code of practice should be seen as a guide for both employers and employees, to come up together with appropriate working arrangements. This does not mean that all employees should start inflexibly working a nine-to-five schedule. The code of practice encourages employers to develop a "Right to Disconnect Policy" that informs workers of the normal working hours that will be reasonably expected from them, but also makes room for the occasional emergency that requires contacting staff outside of their workday, for example to fill in at short notice for a sick colleague. Any new policy should also acknowledge that some roles come with unconventional hours, such as those working across different time zones or requiring international travel. 


The best of both worlds: Making the most of your Hybrid IT strategy

The move towards greater use of the cloud has followed growing concerns on the management and protection of data. Cyber threats are continuing to evolve and accelerate, and the skills required to defend against them are becoming more complex. Regulations such as the GDPR bring additional rights and safeguards for individuals, but the move towards cloud IT could expose a compliance gap – especially for organisations that handle personal data. Organisations that host their data on-premise in local storage systems should be in a position to identify the location of most, hopefully all, of their data, quite quickly and those that host data elsewhere could have concerns over not knowing where the data is stored. However, one of the challenges with public cloud adoption is the skills required to build and maintain it. Do organisations have the skills to ensure that data that is stored on-premise is secure and compliant? For many organisations, meeting compliance and regulatory requirements can be easier to achieve using private clouds. Just because organisations have outsourced their data storage, it doesn’t mean they can outsource responsibility for compliance, however.


Handcuffs Over AI: Solving Security Challenges With Law Enforcement

In cities like Chicago, the citizens of crime-ravaged communities fear the criminals more than they trust the police. The relationships between these communities and law enforcement are so strained that citizens do not provide evidence or testimony that will be used to successfully prosecute the criminals and guarantee deterrence. The same outcome, born of different history, creates a lack of coordination between law enforcement and private organizations being targeted by cybercriminals. The logs and data in systems owned and maintained by these organizations contain critical information that would enable successful prosecution of cybercrime to become the norm, which would deliver deterrence. Building SecOps on the incorrect outcomes of service and data availability have left the craft unprepared to align with law enforcement outcomes. The tools, workflows, and data provide little value to investigators and prosecutors. When an organization does report a crime to law enforcement, the responding agency must comb through a mess of disparate data locations and formats that is more complicated to process than a murder crime scene.



Quote for the day:

"Even the most honest human in authority often does not have the power to undo the damages that bad people do" -- Auliq Ice

Daily Tech Digest - April 07, 2021

How the recent pandemic has driven digital transformation in a borderless enterprise

Talking about the biggest innovations in the last year, Harishankar says: “AIML, Data Science and digital core transformation are big areas for most companies. The whole digital core transformation is a big agenda and a lot of that is being run out of India, we are working with other centres as well but we have both existing talent, a lot of new hires with expertise in this area particularly around digital core transformation. Therefore I would say on the front end, commercial transformation, digital core transformation as well as Data Science, AIML areas, there is a lot that has been happening in the centre. In the new digital way of working it is very important to position your centre in that manner. We are leading innovation and not just part of it. We are equal partners in innovation across any centres in the world.” Talking about technologies that can be deployed or exploited from Indian centres, Bannerjee says once you start to enhance your digital adoption effectively, your store becomes your phone or your PC. You basically have the engineering capabilities to build your front end channels, your ability to quickly access the throughput.


MLOps Best Practices for Data Scientists

Today most ML journeys to get a machine learning model into production look something like this. As a data scientist, it starts with an ML use case and a business objective. With the use case at hand, we start gathering and exploring the data that seems relevant from different data sources to understand and assess their quality. ... Once we get a sense of our data, we start crafting and engineering some features we deem interesting for our problem. We then get into the modeling stage and begin tackling some experiments. At this phase, we are manually executing the different experimental steps regularly. For each experiment, we would be doing some data preparation, some feature engineering, and testing. Then we do some model training and hyperparameter tuning on any models or model architectures that we consider particularly promising. Last but not least, we would be evaluating all of the generated models, testing them against a holdout dataset, evaluating the different metrics, looking at performance, and comparing those models with one another to see which one works best or which one yields the highest evaluation metric.
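The experiment loop described above can be shown in miniature. The one-parameter "model" and squared-error metric here are stand-ins, not a real training pipeline: the point is the shape of the workflow, in which each candidate setting is trained, evaluated against a holdout set, and compared.

```python
# Miniature experiment loop: sweep a hyperparameter, evaluate each
# resulting model on a holdout set, and keep the best performer.
holdout = [(1, 2), (2, 4), (3, 6)]   # (feature, target) pairs, y = 2x

def train(slope):
    """'Train' a one-parameter model; a stand-in for a real fit step."""
    return lambda x: slope * x

def evaluate(model, data):
    """Squared error on the holdout set; lower is better."""
    return sum((model(x) - y) ** 2 for x, y in data)

results = {}
for slope in [1.0, 1.5, 2.0, 2.5]:   # the hyperparameter sweep
    model = train(slope)
    results[slope] = evaluate(model, holdout)

best = min(results, key=results.get)
print(best, results[best])           # the setting with the lowest error
```

MLOps tooling largely automates exactly this loop: tracking each run's parameters and metrics so the comparison step is recorded rather than done by hand.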


How Uber’s Michelangelo Contributed To The ML World

The motivation to build Michelangelo came when the team started finding it excessively difficult to develop and deploy machine learning models at scale. Before Michelangelo, the engineering teams relied mainly on creating separate predictive models or one-off bespoke systems, but such short-term solutions were limited in many respects. Michelangelo is an end-to-end system that standardises workflows and tools across teams to build and operate machine learning models at scale. It has now emerged as the de facto machine learning system for Uber engineers and data scientists, with several teams leveraging it to build and deploy models. Michelangelo is built on open-source components such as HDFS, XGBoost, TensorFlow, Cassandra, MLlib, Samza, and Spark. It uses Uber’s data and compute infrastructure to provide a data lake that stores Uber’s transactional and logged data; Kafka brokers for aggregating logged messages; a Samza streaming compute engine; managed Cassandra clusters; and in-house service provisioning and deployment tools. ... The platform consists of a data lake that is accessed during training and inference.


Review: Group-IB Threat Hunting Framework

Group-IB’s Threat Hunting Framework (THF) is a solution that helps organizations identify their security blind spots and gives a holistic layer of protection to their most critical services in both IT and OT environments. The framework’s objective is to uncover unknown threats and adversaries by detecting anomalous activities and events and correlating them with Group-IB’s Threat Intelligence & Attribution system, which is capable of attributing cybersecurity incidents to specific adversaries. In other words, when you spot a suspicious domain/IP in your network traffic, with a few clicks you can pivot and uncover what is behind this infrastructure, then view historical evidence of previous malicious activities and available attribution information to help you broaden or quickly close your investigation. THF closely follows the incident response process by having a dedicated component for every step. There are two flavors of THF: the enterprise version, which is tailored for most business organizations that use a standard technology stack, and the industrial version, which is able to analyze industrial-grade protocols and protect industrial control system (ICS) devices and supervisory control and data acquisition (SCADA) systems.


How organisations can stay one step ahead of cybercriminals

To get ahead of the hackers, IT teams must be wary of unusual password activity, files being created and deleted quickly, inconsistencies in email usage, and data moving around in unexpected ways. One form of cyberattack involves hackers accessing software patch code and adding malicious code to the patch before it is delivered to customers as a routine update. This method of attack is especially devious because updates and patches are routine maintenance tasks, meaning IT teams are much less likely to be suspicious of them. Anti-malware solutions are also less likely to scrutinise incoming data, such as a patch, from a trusted vendor. One key component that enables these types of attacks is credential compromise. Hackers are careful to obtain authentic credentials whenever possible in order to gain entry to the systems and data they want to access inconspicuously, minimising their digital footprint. As a result, IT teams need to be wary of unusual password activity, such as an uptick in resets or permission change requests. ... Another powerful tool to reduce the risk of a cyberattack is security awareness training. This can lower the chance of an incident such as a data breach by 70%.
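The "uptick in resets" heuristic mentioned above can be made concrete with a small audit-log check. The sketch below is hypothetical: the event names, log shape, and thresholds are invented for illustration, and a production system would read from a real SIEM or identity-provider log.

```python
# Hypothetical sketch: flag accounts whose password-reset rate jumps well
# above their own recent baseline (event names and thresholds are invented).
from collections import Counter

def flag_reset_spikes(events, baseline_days=7, threshold=3.0):
    """events: list of (day, user, action) tuples from an audit log.
    Flags users whose resets on the latest day exceed `threshold` times
    their average daily resets over the preceding `baseline_days`."""
    latest = max(day for day, _, _ in events)
    baseline = Counter()
    today = Counter()
    for day, user, action in events:
        if action != "password_reset":
            continue
        if day == latest:
            today[user] += 1
        elif latest - baseline_days <= day < latest:
            baseline[user] += 1
    flagged = []
    for user, count in today.items():
        avg = baseline[user] / baseline_days
        # max(...) guards against a zero baseline for new accounts.
        if count > threshold * max(avg, 1 / baseline_days):
            flagged.append(user)
    return flagged

log = [(d, "alice", "password_reset") for d in range(1, 8)]  # 1/day baseline
log += [(8, "alice", "password_reset")] * 5                  # spike on day 8
log += [(8, "bob", "login")]
print(flag_reset_spikes(log))  # → ['alice']
```

Comparing each account against its own baseline, rather than a global average, is what makes this kind of check sensitive to the credential-compromise pattern described above.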


Testing Games is Not a Game

Games are getting more and more complex with the years, and gamers are a very demanding public. For titles labeled AAA (triple-A, high-budget projects), we are expected to deliver novel mechanics, mind-blowing gameplay and exotic plot twists. With each iteration, testing all of this becomes harder, and the established ways of working need to be assessed and tweaked. That is quite hard, taking into consideration that there are so many different kinds of games that it would be almost impossible to unify the way a game tester works. Committing to a general agreement on how to tackle testing processes, tools, or even a job description with required skills is simply not feasible in the industry. From one game to another, from one games company to another, the required skills vary and the role changes. Also, due to the common overuse of test cases and testing documentation, live games quickly grow into exponentially larger monsters. Game testers are usually forced to come up with better scoping techniques and risk/impact-based testing, which opens up gaps where quality falls, with a consequent impact on gamers’ happiness.


Experian’s Identity GM Addresses Industry’s Post-COVID Challenges

"Today with so many bad actors focused on how to create automatic ways to fool systems into thinking they are legitimate, it's getting harder to validate that the business is transacting with a real person," Haller said. As a result, identity verification has gotten more sophisticated and better, too. For instance, it looks at IP addresses, device IDs, and GPS coordinates. Another field that is emerging is called behavior biometrics that captures data about how you interact with your keyboard and mouse and then uses that information about your behaviors to verify your identity, Haller said. "It is looking at how quickly you are typing, how you are using your phone, how you carry your phone," he said. "These are all behaviors associated with an identity. It might help determine whether someone has taken your device and is pretending to be you." To help IT security pros to tap into the most advanced technology for verifying identity and preventing fraud, Experian created CrossCore Orchestration Hub to connect the newest and most advanced services with customers. "We are trying to help our clients be more effective in discovering new risks and put new technology into production so they can protect themselves," Haller said.


Quantum computing just got its first developer certification

"The focus right now is on preparing the workforce and skillsets so that businesses have an opportunity to leverage quantum computing in the future," Chirag Dekate, research lead for quantum at analysis firm Gartner, tells ZDNet. "But at the moment, it's a scattershot. One of the questions that always comes across from IT leaders is: 'How do I go about creating a quantum group?'" In many cases, they don't know where to start: according to Dekate, a certification like the one IBM unveiled will go a long way in pointing out to employers that a candidate has the ability to identify business-relevant problems and map them to the quantum space. Although adapted specifically for Qiskit, many of the competencies that are required to pass IBM's quantum developer certification exam are reflective of a wider understanding of quantum computing. Candidates will be quizzed on their ability to represent qubit states, on their knowledge of backends, or on how well they can plot data, plot a density matrix or plot a gate map with error rates; they will be required to know what stands behind the exotic-sounding but quantum-staple Block spheres, Pauli matrices and Bell states.


The importance of endpoint security in breaking the cyber kill chain

The term ‘kill chain’ was originally a military concept for structuring an attack into stages, from identifying an adversary’s weaknesses to exploiting them. It consisted of target identification, force dispatch to the target, decision and order to attack the target, and finally, destruction of the target. In simple terms it can be viewed as a stereotypical burglary, whereby the thief performs reconnaissance on a building before trying to infiltrate it, then goes through several more steps before taking off with the valuables. ... For those defending systems and data, understanding the cyber kill chain can help identify the different defences you need in place. While attackers are constantly evolving their methods, their approach always consists of these general stages. The closer to the start of the cyber kill chain an attack can be stopped the better, so a good understanding of adversaries and their tools and tactics will help to build more effective defences. ... Endpoint protection (EPP) can detect and prevent many stages of the cyber kill chain, completely preventing most threats or allowing you to remediate the most sophisticated ones in later stages.
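One way to operationalise the idea above is to map each kill-chain stage to candidate defences, so gaps in coverage are easy to spot. The stage names below follow the classic Lockheed Martin cyber kill chain model; the defences listed are examples only, not a complete or authoritative catalogue.

```python
# Example mapping of kill-chain stages to defences (illustrative only).
KILL_CHAIN_DEFENCES = {
    "reconnaissance":        ["external attack-surface monitoring"],
    "weaponization":         ["threat intelligence feeds"],
    "delivery":              ["email filtering", "endpoint protection (EPP)"],
    "exploitation":          ["patching", "endpoint protection (EPP)"],
    "installation":          ["application allow-listing", "endpoint protection (EPP)"],
    "command_and_control":   ["egress filtering", "DNS monitoring"],
    "actions_on_objectives": ["data-loss prevention", "least-privilege access"],
}

def uncovered_stages(defences_in_place):
    """Return the kill-chain stages with no deployed defence."""
    deployed = set(defences_in_place)
    return [stage for stage, options in KILL_CHAIN_DEFENCES.items()
            if not set(options) & deployed]

deployed = ["email filtering", "endpoint protection (EPP)", "patching"]
print(uncovered_stages(deployed))
```

The output makes the article's point concrete: EPP alone covers the middle of the chain (delivery through installation) but leaves the early and late stages to other controls.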


Interview With Karthik Kumar, Director Of Data Science For Auto Practise, Epsilon

As they say, “Data is the new code”. The machine learning code is only a small portion of the puzzle and would not suffice to take a model from the POC stage to production. Deployment is a continuous flow of data and learning, making ML an iterative process; hence maintaining high quality in all phases of the ML life cycle is the most important task. The first step is to understand the business problem and translate it into a statistical/machine learning problem. In this journey, the quality of the data is critical, and this is where a data scientist has to spend most of their effort: comprehending and transforming the data to understand its characteristics in order to build a robust machine learning solution that leads to successful business outcomes. Mining the right data, and improving and understanding it, is the most important step, and the one I would emphasise on my projects. Extensive feature engineering from the data helps build a strong data science model, versus iterating models on a fixed data set. My tip to budding data scientists would be to invest the most time in gathering the right data, and in exploring and creating features innovatively.



Quote for the day:

"Uncertainty is not an indication of poor leadership; it underscores the need for leadership." -- Andy Stanley