Daily Tech Digest - September 05, 2019

Charity on the Internet: How to identify scammers

We tell you how to distinguish genuine fundraising groups on Facebook from scammers preying on people's kindness
Facebook has been experiencing a wave of fake fundraising campaigns. The pattern is familiar: Attackers create groups from scratch to which they add a couple of posts. They provide bank transfer details along with a bunch of tear-jerking comments. The groups tend to follow a template. The group’s name contains an appeal for help, and the posts provide emotional stories, usually about terminally ill children whose suffering is illustrated by photos and videos that are posted on the page. Some of the posts are practically word-for-word copies of posts in other fraudulent groups. The only details that differ in each group are the child’s name, diagnosis, and the name of the hospital where the child is receiving treatment. Frequently the contact information and the bank transfer details are the same for multiple groups — which, by the way, is the most reliable indicator of the scam. New scam groups appear every month, and even though complaints shut them down quickly, some users are inevitably taken in and transfer money to the scammers.



Artificial intelligence and machine learning are the next frontiers for ETFs, says industry pro

The thinking behind this is that the collective wisdom of every smart beta ETF out there — including Goldman’s — is better than the mindset of any individual set of stock pickers. “You’re going to add the data to it that, quite frankly, a human brain just can’t digest,” said Tull. So, the key question becomes, is there any evidence that machine learning can actually outperform when it comes to picking stocks? Dave Nadig, who runs ETF.com, says there is. He points to the AI Powered Equity ETF (AIEQ), which has risen 17%, besting the S&P 500 this year. The fund, run by Equbot, uses both A.I. and IBM Watson to find opportunities in the market. “I think this is the next generation, frankly, of financial product development,” said Nadig. “Machine learning sounds big and scary, but all it is, is really just taking data and things you already know, how things perform, to generate rules – as opposed to hiring a bunch of CFAs to come up with those rules about what you’re going to buy and sell based on fundamentals.”
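Nadig's description of machine learning as "taking data ... to generate rules" has a simple, if loose, analogue in a decision tree, which emits human-readable rules learned from data. Below is a toy sketch in Python: the feature names, synthetic data, and labels are all invented for illustration and have nothing to do with AIEQ's actual methodology.

```python
# Illustrative sketch only: a decision tree "generates rules" from
# fundamentals data, in the spirit of Nadig's description. Features
# and labels are synthetic, not from any real fund.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
n = 1000
# Hypothetical fundamentals: P/E ratio, earnings growth, debt/equity
X = np.column_stack([
    rng.uniform(5, 60, n),      # pe_ratio
    rng.normal(0.05, 0.1, n),   # earnings_growth
    rng.uniform(0, 3, n),       # debt_to_equity
])
# Toy label: "outperformed" if cheap and growing (a stand-in for real returns)
y = ((X[:, 0] < 25) & (X[:, 1] > 0.05)).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
# The learned splits are the machine-generated "rules"
print(export_text(tree, feature_names=["pe_ratio", "earnings_growth", "debt_to_equity"]))
```

The printed tree is literally a set of if/then rules of the kind an analyst might otherwise have written by hand, which is the point Nadig is making.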


Not worried about unethical AI? You should be

Millennials (ages 18-38) are the age group most comfortable with AI, but they also have the strongest opinions that guard rails, or ethical practices, are needed. Whether it’s anxiety over AI, desire for a corporate AI ethics policy, worry about liability related to AI misuse or a willingness to require a human employee-to-AI ratio — it’s the youngest group of employers who consistently voice the most apprehension. For example, 21% of millennial employers are concerned their companies could use AI unethically, compared to 12% of Gen X and only 6% of Baby Boomers. “Our research reveals both employers and employees welcome the increasingly important role AI-enabled technologies will play in the workplace and hold a surprisingly consistent view toward the ethical implications of this intelligent technology,” continued Leeson. “We advise companies to develop and document their policies on AI sooner rather than later – making employees a part of the process to quell any apprehension and promote an environment of trust and transparency.”


Continuous Delivery for Machine Learning


Regardless of which flavour of architecture you have, it is important that the data is easily discoverable and accessible. The harder it is for Data Scientists to find the data they need, the longer it will take for them to build useful models. We should also consider that they will want to engineer new features on top of the input data that might help improve their model's performance. ... We then stored the output in a cloud storage system like Amazon S3, Google Cloud Storage, or Azure Storage Account. Using this file to represent a snapshot of the input training data, we were able to devise a simple approach to version our dataset based on the folder structure and file naming conventions. Data versioning is an extensive topic, as data can change along two different axes: structural changes to its schema, and the actual sampling of data over time. Our Data Scientist, Emily Gorcenski, covers this topic in more detail in this blog post, but later in the article we will discuss other ways to version datasets over time.
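As a minimal sketch of that folder-structure-and-naming approach (the bucket layout, dataset name, and file format below are hypothetical, not taken from the article):

```python
# A minimal sketch of folder-and-filename dataset versioning; the
# object-key layout is a hypothetical example, suitable for S3, GCS,
# or Azure blob storage.
import hashlib
from datetime import datetime, timezone

def snapshot_key(dataset: str, data: bytes) -> str:
    """Build an object key that versions a training-data snapshot by
    date and content hash."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    digest = hashlib.sha256(data).hexdigest()[:12]
    return f"datasets/{dataset}/snapshots/{stamp}/{digest}.csv"

data = b"user_id,amount\n1,10.5\n2,3.2\n"
print(snapshot_key("transactions", data))
# e.g. datasets/transactions/snapshots/2019-09-05/3b1f9a2c4d5e.csv
```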


Data Governance and Machine Learning


The advanced AI system providers seem to think that only ML-powered solutions will ultimately satisfy both the regulatory and compliance requirements. Let’s take the example of the banking sector. Currently, the lack of consistency in data definition and quality is a serious deterrent to business operations across the enterprise. ML can help solve regulatory and compliance issues, specifically those related to Data Governance and data security and privacy, faced by different divisions within an enterprise. Now with the General Data Protection Regulation (GDPR) and similar requirements taking effect in many parts of the world, advanced technologies are viewed as welcome transitions in global businesses. ... Gartner believes that by 2020, at least 50 percent of Data Governance policies will be driven by metadata. The greatest strengths of metadata are the implementation of accountability at every step, a common vocabulary, and an auditable process for compliance. Then ML technologies can move from the science labs to business halls. ... Surprisingly, during a survey of business executives, only 12 percent of the respondents acknowledged the presence of a defined Data Strategy in their enterprises.


Meet FPGA: The Tiny, Powerful, Hackable Bit of Silicon at the Heart of IoT

In a CPU, the configuration of the chip is established at the chip foundry. Programming governs the movement of bits through the pre-set architecture. In an FPGA, though, the configuration is defined by a hardware description language (HDL) and loaded from storage — frequently into static random access memory (SRAM) — at device boot time. This means the architecture of the device can be optimized for a particular task — and updated or upgraded as vulnerabilities are discovered or new capabilities are licensed. The ability to update the FPGA is seen as a positive security step because vulnerabilities can be addressed with new definitions. FPGAs are growing in popularity among device manufacturers because they fit more easily into an "agile" work process than do ASICs. While ASICs have to be defined in a manufacturing process that can take weeks or months in production quantities, FPGAs are defined by software that can be revised as frequently as releases can be dropped — many times a day during development.


How Robot CEOs Could Save Capitalism


In the wake of Big Tech, as questions about ethics dominate national conversations and the technology industry focuses on more ethical approaches to A.I., Wallis’ recommendation that the private sector fix itself through the checks and balances of competition could prove to be a valuable and effective solution to rebuilding America’s middle class. While America’s largest companies begin to deliberate a new definition for the purpose of corporations, technologists and startups seeking to create ethical technology would be wise to explore ways A.I. can improve our economy while doing the least harm to the human workforce. By creating new technology to replace exorbitantly paid CEOs with A.I. that can do their jobs more efficiently while potentially offering more stability to companies, America’s corporations could benefit while the overall workforce remains intact. In turn, billions of dollars that would have gone to CEOs could be freed up to be redistributed through the economy directly, allowing companies to pay sensible wages that can keep pace with inflation without borrowing from the Federal Reserve.


Securing Your Kubernetes Pipeline

Automation is the answer to this challenge. It is nearly impossible to track changes manually, so you have to automate some parts of the process for maximum efficiency. This is done by integrating security compliance into the development and deployment processes. To be able to take this step, however, you have to define clear security and compliance policies first. Integrating security and compliance as early in the pipeline as possible is also highly recommended. This means securing not just the app or code, but also the CI/CD pipeline itself. Fortunately, there are several ways to achieve this. You can, for instance, use the IaC approach to create a standardized deployment stack. Since infrastructure is baked into the deployment package, it is much easier to make sure that a consistent cloud infrastructure is maintained. Another approach is adding (and enforcing) security policies, which we will get to in a second. Using tools like Kritis, Ops can enforce security policies at a much earlier stage in the development process. The policies govern how new updates and microservices are deployed.
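To make the policy idea concrete, here is a deliberately tiny sketch, not Kritis itself, of the kind of admission gate such tools enforce: only signed images from an approved registry may be deployed. The registry name and image digest are invented.

```python
# Hypothetical illustration of a deployment policy gate: reject
# images from unknown registries and images without a known signature.
ALLOWED_REGISTRIES = {"registry.example.com"}   # invented registry
SIGNED_IMAGES = {"registry.example.com/shop/api@sha256:abc123"}  # from a signing service

def admit(image: str) -> bool:
    registry = image.split("/", 1)[0]
    if registry not in ALLOWED_REGISTRIES:
        return False           # unknown registry: reject the deployment
    return image in SIGNED_IMAGES  # unsigned digests are rejected too

assert admit("registry.example.com/shop/api@sha256:abc123")
assert not admit("docker.io/random/image:latest")
```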


Using cloud, big data and biometrics to build the airport of the future


Ibbitson's vision of a joined-up airport system has been shaped by the position he holds within Dubai Airports. A focus on integration is inherent to this role: one year after joining the company as CIO, Ibbitson added responsibility for engineering services, taking on his current title of executive vice president for technology and infrastructure. His role is to bring IT and engineering together, helping the organisation make more efficient use of data across its business environment. Considerable progress has already been made and some of this preparatory work will form the foundation for the airport of the future. ... This platform, which supports a move toward multi-factor authentication, provides an integrated point of access to the organisation's data. Ibbitson's work on data integration forms part of his ongoing efforts to refine identity management processes and to introduce biometrics for the airport of the future. The aim is that airlines across DXB and DWC will be able to automate identity checks, allowing passengers to use their verified identity to move smoothly around the airports.


Darktrace unveils the Cyber AI Analyst: a faster response to threats

Typically, a human analyst will spend half an hour to half a day investigating a single suspicious security incident. They will look for patterns, form hypotheses, reach conclusions about how to mitigate the threat and share the findings with the rest of the business. The AI cyber security company claims its new solution accelerates this process, continuously conducting investigations behind the scenes and operating at a speed and scale beyond human capabilities. And crucially, Darktrace says it can conduct expert investigations into hundreds of parallel threads simultaneously and instantly communicate its findings in the form of an actionable security narrative. “Cyber AI Analyst emulates the human thought processes that take place when a security analyst performs an investigation on a security incident. It’s like having an extra member of staff that is an expert in their field, and reports on issues in seconds, instead of hours.”



Quote for the day:


"People leave companies for two reasons. One, they don't feel appreciated. And two, they don't get along with their boss." -- Adam Bryant


Daily Tech Digest - September 04, 2019

Vision 2020: Reimagining India over the next decade through AI


Government bodies and private players are already collaborating to pilot AI-led applications, even in domains which had hitherto been relatively untouched by cutting-edge innovations; these include areas such as agriculture, education and healthcare. The results generated thus far are extremely encouraging and indicative of the critical role that the technology can play in driving large-scale transformations across industries and sectors. Talking about specific use cases, the Ministry of Home Affairs is working towards deploying India’s first Intelligent Traffic Management System (ITMS) in New Delhi. Aimed at addressing the city’s perennial traffic woes, the deployment will leverage AI-based smart traffic signals to monitor, automate and streamline the flow of traffic. Similarly, the Ministry of Defence has its own AI Task Force, which is advising the government about the possible offensive and defensive applications to optimize its military strategy and further enhance India’s position as an emerging superpower.



USB4 is ready: Twice as fast, smaller, and hitting devices in 2020


USB4 is the next major version of USB, which gains a major speed boost thanks to Intel licensing its Thunderbolt 3 protocol to the USB Promoter Group on a royalty-free basis. This group includes Apple, HP, Intel, Microsoft, and Texas Instruments. USB4 will enable 40Gbps speeds equivalent to Thunderbolt 3, which is currently found in high-end computers like the MacBook Pro and peripherals. That's twice as fast as the current USB 3.2. However, as noted by CNET's Stephen Shankland, many consumers are still using computers with earlier versions of USB that offer 5Gbps or 10Gbps. Thunderbolt 3's incorporation into USB4 should bring higher speeds to lower-end devices and peripherals. And those higher speeds will be useful for connecting multiple displays and getting data from external hard drives. The longer-term promise of the speedier USB4 is that device makers will stop using old rectangular USB-A ports and USB Micro B ports in favor of the newer USB-C connectors, which USB4 requires to work. The USB Implementers Forum told CNET that consumers could expect to see devices, including laptops, external hard drives, and dongles with USB4 support in the "second half of 2020".


Developer code reviews: 4 mistakes to avoid


Code reviews typically fall into one of two poor patterns. The first involves the reviewer not making any comments: "When there are no comments, that should terrify you," Presley said. "It leads to apathy—if you're rubber stamping, why do it at all?" The second is when a simple set of changes turns into a long, drawn-out process, when quick changes turn into inefficient meetings with too many people involved to actually solve problems, Presley said. "It's exhausting, and a waste of time for you and lots of other people," he added. The simple goal of code reviews is to find bugs early on in the process, since bugs cost more the later they are discovered, Presley said. Several case studies back this up, he explained: For example, IBM found that each hour of code inspection prevents about 100 hours of related work, including support at QA. And after introducing code reviews, Raytheon reduced its cost of rework from 40% of the project cost to 20% of the cost, and the amount of money spent on fixing bugs dropped by 50%, Presley said.


Capabilities of attackers outpacing security leaders' ability to defend their organization: Study - CIO&Leader
This issue is compounded with limited resources, including lack of sufficient budget and skilled professionals as well as a threat attack surface that is quickly expanding and becoming more sophisticated. Because of this, security leaders understand it is critical to have the right strategies in place as they face an arms race between the capabilities of attackers and their own defense postures. The global survey polled CISOs across various industries about the biggest challenges they’re facing and strategies they’re putting in place to address these obstacles. “The Forbes Insights survey echoes the primary challenges we hear directly from Fortinet customers and prospects. Today's CISOs are tasked with the challenge of allocating limited funds and resources to the highest-return cybersecurity projects which can range from breach detection to response. These C-level security leaders must maximize security with finite resources, all while balancing strategic leadership responsibilities and tactical issues. Through the Fortinet Security Fabric, Fortinet is providing end-to-end security so that CISOs can navigate a rapidly changing cyber threat landscape day in and day out,” said John Maddison, EVP of Products at Fortinet.


Those adopting a multi-cloud approach were far more likely to have suffered a data breach over the past 12 months, the study shows, with 52% reporting breaches compared with 24% of hybrid-cloud users and 24% of single-cloud users. Companies with a multi-cloud approach are also more likely to have suffered a larger number of breaches, with 69% reporting 11-30 breaches compared with 19% of those from single-cloud and 13% from hybrid-cloud businesses. “When it comes to ensuring resilience and being able to source ‘best-in-class’ services, using multiple vendors makes sense,” said Reed. “However, from a security perspective, the multi-cloud approach also increases exposure to risk as there are a greater number of parties handling an organisation’s sensitive data. This is exactly why an eye must be kept on integration and a concerted effort be made to gain the visibility needed to counter threats across all different types of environments.” 


When properly designed and deployed, predictive analytics can deliver deep insights into an array of commonplace and unique network issues, helping operators handle everything from policy setting and network control to security, says Rahim Rasool, an associate data scientist with Data Science Dojo, a data science training organization. To tackle security issues, for instance, predictive analytics can use anomaly detection algorithms to sniff out suspicious activities and identify possible data breaches. "These algorithms scan the behavior of networks working in the transfer of data and distinguish legitimate activity from others," Rasool explains. "With predictive analytics systems, the vulnerabilities in a network can be detected before a hacker group does and, subsequently, a defense mechanism can be drawn out." Another way predictive analytics can help organizations is by comparing trends to infrastructure capabilities and alert thresholds. "Almost all signals have an upper bound and a lower bound that are a result of the infrastructure's capabilities," says Gadi Oren, vice president of technology evangelism at LogicMonitor, which operates a cloud-based performance monitoring platform. 
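As a hedged illustration of the anomaly-detection idea Rasool describes, the sketch below trains scikit-learn's IsolationForest on invented "normal" network-flow features and flags a burst transfer as suspicious; a real deployment would use far richer telemetry than these two made-up dimensions.

```python
# Toy anomaly detection on synthetic network-flow features
# (bytes transferred, connection duration in seconds); all data invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[500, 2.0], scale=[100, 0.5], size=(500, 2))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

flows = np.array([[520, 2.1],      # typical transfer
                  [50000, 0.1]])   # burst pattern resembling exfiltration
print(model.predict(flows))        # 1 = legitimate, -1 = suspicious
```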


Enterprise software will see the highest growth in 2019 and 2020 (9% and 10.9% respectively), while devices, communications services and data center systems will all recover somewhat in 2020 from declines in 2019, according to Gartner. The analyst firm sees the cloud spreading its tentacles further into the enterprise, encompassing areas like office suites, content services and collaboration services. "Spending in old technology segments, like data center, will only continue to be dropped," Lovelock said. ... IDC sees a 'natural cohesion' between traditional and new technologies: "Cloud and mobile enable rapid deployment and connectivity, while also cutting costs and complexity in legacy operations which allows businesses to focus on new digital innovation," says the analyst firm. Such synergies, along with the continuing need for professional services associated with the roll-out of digital transformation solutions, will mean that the impact of new technologies is "much bigger than revenues associated with discrete categories such as IoT sensors, 3D printers or drones," IDC says.


How to avoid CIO and CFO clashes over cloud spend

Much of the cloud budgeting issues can be traced back to a disconnect between IT and finance, according to the report, which ultimately hurts the business. The IT department is often unaware of the burden cloud budgeting has on finance, the report found: 51% of finance respondents said they occasionally overspend on cloud resources, compared to 37% of IT respondents, who are less aware. Some 68% of finance respondents said they are alerted to overspend only after it's too late, whereas 80% of IT respondents said they are alerted before the overspend takes place, the report found. Collaboration between IT and finance in a formal reporting capacity remains rare, as only 28% of professionals surveyed said this happens in their organization, the report found. The CIO and CFO play key roles in any organization, but the two have historically faced challenges working together over budgets and technology investments. Budgets tend to be the largest point of friction, as those are not typically a strength of the CIO, Khalid Kark, US CIO program research leader at Deloitte, told TechRepublic. On top of that, many times CIOs are investing in assets that may not have direct ROI.


Settling the edge computing vs. cloud computing debate

The edge computing side that’s in the vehicle needs to respond immediately to changing data in and around the vehicle, such as an impending crash or weather-related hazards. It does not make sense to send that data all the way to a central cloud server, where the decision is made to apply the brakes, and then back to the vehicle. By then you’ll have hit the semi. However, edge devices are typically much lower powered, with limited storage and compute capabilities. Deep learning processing and predictive analytics to determine the best approach to vehicle maintenance based on petabytes of historic data is best done on back-end cloud-hosted servers. See how that works? The edge computing market will continue to grow. A report on the topic, sponsored by software provider AlefEdge, pegs the size of the edge-computing market at more than $4 trillion by 2030. At the same time the cloud computing market will be 10 times that, and you’ll find the growth of both markets more or less proportional. Edge computing needs cloud computing, and the other way around. Indeed, public cloud computing providers will take advantage of the use of edge-based systems, providing small cloud service replicants, or smaller edge-based versions of cloud services.


IoT security essentials: Physical, network, software

Where IoT is concerned, however, best security practices aren’t as fleshed out. Some types of IoT implementation could be relatively simple to secure – a bad actor could find it comparatively difficult to tinker with a piece of complex diagnostic equipment in a well-secured hospital, or a big piece of sophisticated robotic manufacturing equipment on an access-controlled factory floor. Compromises can happen, certainly, but a bad actor trying to get into a secure area is still a well-understood security threat. By contrast, smart city equipment scattered across a metropolis – traffic cameras, smart parking meters, noise sensors and the like – is readily accessible by the general public, to say nothing of anybody able to look convincing in a hard hat and hazard vest. The same issue applies to soil sensors in rural areas and any other technology deployed to a sufficiently remote location. The solutions to this problem vary. Cases and enclosures could deter some attackers, but they might not be practical in some instances. The same goes for video surveillance of the devices, which could become a target itself. The IoT Security Foundation recommends disabling all ports on a device that aren’t strictly necessary for it to perform its function, implementing tamper-proofing on circuit boards, and even embedding those circuits entirely in resin.



Quote for the day:


"Effective leaders don't react to problems, they respond to problems and learn." -- Danny Cox


Daily Tech Digest - September 03, 2019

Cloud 2.0: A New Era for Public Cloud

Security remains a primary concern for companies moving to the cloud, even though public cloud providers offer security capabilities like data classification tools and even whole cloud environments tailored to meet industry-specific specifications - both of which Deloitte names as vectors to cloud progress. “A lot of times, one of the first things companies do in the cloud is migrate existing apps, workloads and the data they operate on to the cloud. The security model in the cloud is rather different, and sometimes data and assets need to be secured in a more granular way, so data classification is part and parcel of a prudent migration to the cloud,” Schatsky says. ... “There are apps written in the old mode of app dev and to convert them to the world of cloud takes time, effort and a willingness to do so. Plus, it takes skill. It’s not a trivial task. Those are the things that are slowing the process of moving everything to the cloud.” Schatsky agrees. “For a lot of companies, they’re dealing with incubating the skills they need to take full advantage of the cloud. When companies start by moving wholesale to the cloud, the biggest need they have is to just propagate the impact on their workflow and operating models that the cloud enables. You can’t rush that. It’s a human capital thing that takes time,” he says.


The Path to Modern Data Governance

It is worth noting that the longest list of activities is the people list. This is typical, as having all of the right people, engaged in the right ways, is critical to data governance success. The processes and methods lists are tied for 2nd longest. People, processes, and methods are at the center of effective data governance. The example shown in figure 3 illustrates the idea that we have selected a subset of the activities – not all of them – for initial planning. (The color coding here is different, mapping activities to projects.) To make modernization manageable and practical, it is important to make conscious decisions about what NOT to do. The selected activities are organized based on affinity – they seem to fit together and make sense as a project. They are also organized and based on dependencies – what makes sense to do in what sequence. Note here that the activities in a single project don’t necessarily all come from the same layer of the framework. The bottom sequence in green, for example, includes two activities from the culture layer, one from the methods layer, and one from the people layer.


Industry calls for joint participation to cement Australia's digital future


The report outlined how universities and publicly funded research agencies needed to reshape their research culture to safeguard and strengthen the country's digital workforce and capability pipeline, by placing substantially higher emphasis on industry experience, placements, and collaborations in hiring, promotion, and research funding. At the same time, there are also recommendations about how to lift the skills of teachers on ICT-related topics, and the need to increase diversity, particularly women, while removing structural barriers that cause the loss of knowledge, talent, and educational investment from the ICT and engineering sectors. "Attracting high-quality international students to, and retaining them in, Australia after they graduate is a good way to expand the diversity of the ICT skill base and to promote greater international engagement, not least of which with the home countries of those people. We should make it easier to keep such people after the end of their formal studies," the report said. Another recommendation the report made included the need for government to undertake a future-readiness review for the Australian digital research sectors, as well as to monitor, evaluate, and optimise the applied elements of the federal government's National Innovation and Science Agenda and the Australia 2030 Plan.


Is the tech skills gap a barrier to AI adoption?


Without the right workforce, organisations simply cannot proceed to tackle the technical challenges existing in a data-driven industry. This can help to reverse the inconsistencies and setbacks with data-led AI projects. With the right analytics platform, data capabilities can be put in the hands of the business experts who not only have the context of the questions to solve but the data sources needed to deliver insights at speed. Trained data scientists will still be required, but the shortage of them does not mean all activity, or some level of a project, can’t be tested and iterated and progressed. Existing employees should be still able to perform some levels of data tasks despite not being experts. They are in the line-of-business, close to the questions, the data, and the leaders who need insight. Linking up data insight for people with the vital business knowledge is paramount to making the most of data analytics and fuelling business progress. What’s more, getting data in the hands of the people is crucial in order to democratise AI and make advanced analytics more accessible to everyone, rather than locked away by a ‘priestly caste’ of data scientists.


USBAnywhere Bugs Open Supermicro Servers to Remote Attackers


USBAnywhere stems from several issues in the way that BMCs on Supermicro X9, X10 and X11 platforms implement virtual media, which is the ability to remotely connect a disk image as a virtual USB CD-ROM or floppy drive. “When accessed remotely, the virtual media service allows plaintext authentication, sends most traffic unencrypted, uses a weak encryption algorithm for the rest and is susceptible to an authentication bypass,” according to the paper. “These issues allow an attacker to easily gain access to a server, either by capturing a legitimate user’s authentication packet, using default credentials, and in some cases, without any credentials at all.” Once connected, the virtual media service allows the attacker to interact with the host system as a raw USB device. This means attackers can attack the server in the same way as if they had physical access to a USB port, such as loading a new operating system image or using a keyboard and mouse to modify the server, implant malware, or even disable the device entirely. “Taken together, these weaknesses open several scenarios for an attacker to gain unauthorized access to virtual media,” according to Eclypsium.


Risk mitigation is key to blockchain becoming mainstream

A solution in search of a problem, blockchain is often associated with cryptocurrency, which is, arguably, the single worst application of the “immutable” ledger that defines the technology. Supply chains are a much better use, due to the high levels of integrity and availability provided by a blockchain. A blockchain is essentially a piece of software, run on multiple computers (or nodes) that work together as participants of a distributed network to produce a record of transactions submitted to that network in a ledger. The ledger is made of blocks that are produced when nodes run complex cryptographic functions; these blocks are chained together to produce a blockchain. Nodes perform validation of each block that is created to verify its integrity and ensure it has not been tampered with. If a majority of nodes validate the block, consensus is reached, confirming the recorded transactions to be true. The block is then added to the blockchain and the ledger is updated.
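That description maps almost directly onto code. Here is a minimal sketch in Python, with SHA-256 standing in for the "complex cryptographic functions" and a single validation pass standing in for the per-node consensus check:

```python
# Minimal ledger sketch: blocks of transactions chained by hashes,
# plus a validation pass like the one each node performs.
import hashlib, json

def block_hash(block: dict) -> str:
    payload = json.dumps({k: block[k] for k in ("index", "prev_hash", "transactions")},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    block = {"index": len(chain),
             "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
             "transactions": transactions}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain: list) -> bool:
    # Re-hash every block to verify nothing has been tampered with,
    # and check each block points at its predecessor's hash
    return all(b["hash"] == block_hash(b) and
               (i == 0 or b["prev_hash"] == chain[i - 1]["hash"])
               for i, b in enumerate(chain))

ledger: list = []
add_block(ledger, [{"from": "A", "to": "B", "qty": 5}])
add_block(ledger, [{"from": "B", "to": "C", "qty": 2}])
print(is_valid(ledger))                      # True
ledger[0]["transactions"][0]["qty"] = 500    # tamper with the ledger
print(is_valid(ledger))                      # False
```

Tampering with any recorded transaction changes the recomputed hash, so validation fails; that integrity property is exactly what makes the ledger attractive for supply chains.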


MDM can tame and monetize IoT data explosion


To achieve success at large scale, Bonnet says a company's MDM system must allow for an agile delivery process. "It is almost impossible to be sure about the data structure, semantics, and governance process a company needs to start, and the prediction for the future is so hard to establish, even impossible," he laments. The inability to know the future is the key reason for the agility mindset. This is a vital awareness. "If the MDM system is not agile enough, then all the existing systems running in a company could be slowed in their ability to change. There is also a potential for poor integration with the MDM system, which will not improve the data quality and may have the opposite effect," he continues. He suggests checking two points: first, the MDM system must be agile, without a rigid engineering process that could delay the delivery of the existing systems. This is what is called a "model-driven MDM" for which the data semantics will drive a big part of the expected delivery in an automatic process.


Data-Driven Design Is Killing Our Instincts


Design instinct is a lot more than innate creative ability and cultural guesswork. It’s your wealth of experience. It’s familiarity with industry standards and best practices. You develop that instinct from trial and error — learning from mistakes. Instinct is recognizing pitfalls before they manifest into problems, recognizing winning solutions without having to explore and test endless options. It’s seeing balance, observing inconsistencies, and honing your design eye. It’s having good aesthetic taste, but knowing how to adapt your style on a whim. Design instinct is the sum of all the tools you need to make great design decisions in the absence of meaningful data. ... Not everything that can be counted counts. Not everything that counts can be counted. Data is good at measuring things that are easy to measure. Some goals are less tangible, but that doesn’t make them less important. While you’re chasing a 2% increase in conversion rate you may be suffering a 10% decrease in brand trustworthiness. You’ve optimized for something that’s objectively measured, at the cost of goals that aren’t so easily codified.


How to Use Chaos Engineering to Break Things Productively

There are a number of variables that can be simulated or otherwise introduced into the process. These should reflect actual issues that might occur when an app is in use, prioritized by their likelihood of occurrence. Problems that can be introduced include hardware-related issues like malfunctions or a server crash as well as process errors related to sudden traffic spikes or sudden growth. For example, what might happen during the whole online shopping experience if a seasonal sale results in a larger than expected customer response? You can also simulate the effects of your server being the target of a DDoS attack that's designed to crash your network. Any event that would disrupt the steady state is a candidate for experimentation. Compare your results to the original hypothesis. Did the system perform as anticipated, beyond expectations, or produce worse results? This evaluation shouldn't be undertaken in a vacuum, but should include input from team members and services that were utilized to conduct the experiment.
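As a toy illustration of that loop (steady state, injected fault, comparison against the hypothesis), consider the sketch below, where added latency stands in for a traffic spike or a failing dependency and every number is invented:

```python
# Toy chaos experiment: define a steady-state hypothesis, inject a
# simulated fault, and check whether the hypothesis still holds.
import random

def request_latency_ms(fault_injected: bool) -> float:
    base = random.gauss(120, 15)                   # normal service behaviour
    return base + (300 if fault_injected else 0)   # simulated degradation

def run_experiment(hypothesis_p95_ms: float = 200, samples: int = 1000) -> None:
    for fault in (False, True):
        latencies = sorted(request_latency_ms(fault) for _ in range(samples))
        p95 = latencies[int(samples * 0.95)]
        verdict = "holds" if p95 <= hypothesis_p95_ms else "violated"
        print(f"fault={fault}: p95={p95:.0f}ms, hypothesis {verdict}")

run_experiment()
```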



Quote for the day:


"True leaders bring out your personal best. They ignite your human potential" -- John Paul Warren


Daily Tech Digest - September 02, 2019


Big Four and Blockchain: Are Auditing Giants Adopting Yet?

At this point, all of the Big Four companies have at least demonstrated some interest in blockchain, although their approaches tend to differ. Some companies, like Deloitte, have been mostly researching how this technology has affected the general market, while EY, for instance, has focused on releasing software solutions tailored for the needs of cryptocurrency businesses. Such diversity can be explained by the very nature of those companies — being professional services networks, they offer a variety of services, including audit, tax, consulting, enterprise risk and financial advisory. ... “Because the Big Four work in such a wide scope of sectors, they are unable (or unwilling) to dedicate serious time to blockchain. This makes sense, given that they cannot invest in every new technology set which comes along (although we view blockchain as different). One key thing to note is that many of the big four only got into blockchain when Crypto projects began using them to show more transparency. The Big Four are known to only get involved with something when their client base is using it, blockchain was and is no exception.”


Social media and enterprise apps pose big security risks


“Today’s organisations are heavily dependent on applications, and employees will often use them to perform key parts of their job,” said Ollie Sheridan, security engineer for Europe, the Middle East and Africa at Gigamon. “However, it also means these applications can have access to sensitive corporate data, which could put an organisation at risk if it fell into the wrong hands. “Organisations should therefore treat applications as part of their own network and aim to have complete visibility of their functions. Security should always be paramount when new applications are being deployed.” Scott Crawford, a security analyst at 451 Research, told Computer Weekly in June 2018 that security threats arise because companies are using a diverse range of applications. Often, IT and security teams do not have the resources or time to identify and respond to attacks, he said. The Gigamon survey also asked IT security professionals which applications they believe bring in the most malware to the enterprise.


The Psychological Reasons of Software Project Failures

Coding is not a challenge. In fact, code is the last thing anybody is willing to pay for (though, ironically, it is the most important thing that gets produced in the end). The real challenge, and the real duty of a programmer, is solving problems that customers face, most likely with code but not necessarily. These problems are usually only partially “technical”, often sociological, often complex, often wicked. As problem complexity grows, the required effort, intelligence, knowledge and dedication to solve it grows as well, sometimes exponentially. Recognizing complexity, confining it and minimizing it is the ultimate goal of a programmer. This raises the bar so high that an average person might fail to present the sufficient personal qualities required for the job, and turn out to be relatively stupid. As David Parnas states it: “I have heard people proudly claim that they have built lots of large complex systems. I try to remind them that the job could have been done by a small simple system if they had spent more time on "front-end" design. Large size and complexity should not be viewed as a goal.”


Beware this insidious word in the workplace


What is the most important aspect of leadership? Because of its nature, it’s possible to begin a sentence with “Leadership is about…” and choose from dozens of applicable words to finish it, all of which would prompt nods of agreement. But my vote would be for trust as the most important among them. If leaders consistently undermine their people, they will also undermine the expectation that their people will do the right thing, whatever the context. If that expectation goes away, so, too, does motivation. Another key to leadership, a close second for me after trust, is respect — not just because the leader needs to earn respect, but because the leader must respect the people who work for him or her. When I interviewed the Hollywood executive Jeffrey Katzenberg years ago, he shared a key insight that stayed with me. “By definition, if there’s leadership, it means there are followers, and you’re only as good as the followers,” he said. “I believe the quality of the followers is in direct correlation to the respect you hold them in. It’s not how much they respect you that is most important.”


Why do DBAs dislike loops?


So why do data people tend to avoid (or even actively dislike) loops? (Can you say cursor anyone?). Scalability! Loops just don’t scale well. A loop that is fast at 100 loops is going to take twice as long at 200 loops, five times as long at 500 loops and one hundred times as long at 10,000 loops. That’s a problem in the database world, where at 10,000 rows a table is still considered small and, depending on your experience, a mid-sized table might be 1,000,000 rows or more. As in all things, I like examples, so here’s a simple one. I’m creating a table with an identity column and a date column. I’m going to record times spent updating each row one at a time and just updating the entire table. Then I’m going to add 10 rows and run again, 10 rows and run again, etc. until I have 7500 rows. Quick note to everyone who reads this and thinks “But …”: I’m aware this is a really simple example. If you have buts that you think will significantly change the outcome, feel free to run a test yourself, put the results in the comments, or even better, blog them and link the blog in the comments.
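The experiment is easy to reproduce in miniature. The sketch below uses Python and SQLite rather than the author's setup (presumably SQL Server, an assumption on my part), and runs a single size rather than stepping up by tens, but it shows the same shape: 7,500 row-by-row updates versus one set-based UPDATE.

```python
# Compare row-by-row updates in a loop with a single set-based UPDATE.
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, d TEXT)")
conn.executemany("INSERT INTO t (d) VALUES (?)", [("2019-09-02",)] * 7500)

start = time.perf_counter()
for row_id in range(1, 7501):                # row-by-row: 7,500 statements
    conn.execute("UPDATE t SET d = ? WHERE id = ?", ("2019-09-03", row_id))
loop_secs = time.perf_counter() - start

start = time.perf_counter()
conn.execute("UPDATE t SET d = ?", ("2019-09-04",))  # one set-based statement
set_secs = time.perf_counter() - start

print(f"loop: {loop_secs:.3f}s  set-based: {set_secs:.3f}s")
```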


DigitalOcean Adds Managed MySQL and Redis Services

Both Managed MySQL and Redis options support up to two standby nodes that take over automatically if the primary node fails. Managed MySQL customers can provision read-only nodes in additional geographic regions for horizontal scaling. Managed MySQL customers also get access to monitoring and proactive alerting functionality, and the ability to fork an entire cluster based on a specific point in time. Bearfield says that Managed Redis will also get database metrics and monitoring upon general availability. Both the Managed MySQL and Redis offerings come with two cluster types: single node or high-availability. The single node clusters start at $15 per month and provide 1 GB of memory, 1 vCPU, and 10 GB of SSD disk storage. As the name suggests, the single node clusters aren't highly available, but do support automatic failover. The high availability clusters offer up to two standby nodes and begin at $50 per month. The single node plan offers database instances as large as 32 GB of RAM, 8 vCPUs, and 580 GB of storage.


CISOs turn to AI, detection, response and education


CISOs believe that AI technologies such as machine learning, along with analytics, relieve IT teams of monotonous tasks, so they can focus on business-critical jobs such as identifying anomalous behaviour in their networks and responding to threats quickly. According to the survey, security leaders are currently allocating an average of 36% of their security budget to response. However, most would like to shift their resources from prevention to bolster detection and response capabilities and increase response investments to 40% of their budget. “There is a growing realisation that breaches are inevitable, and that strong detection and response practices are a greater priority,” the report said. CISOs believe talent and training constraints have a significant impact on their organisations, the survey found, with CISOs paying more attention to educating their own employees on best practices and building cyber security awareness in order to prevent and reduce internal threats.


People And Machines – A Workplace Reality

For people to seize this kind of opportunity they must be able to embrace change, as well as having access to learning and reskilling programmes to help them on their journey. As mentioned above, this is one area where HR cannot afford to drop the ball. Similarly, another crucial factor to consider is ensuring that all employees are able to benefit on an equal basis. “We have to ask ourselves,” says Cable, “if we don’t act and invest with new technology, who might be left behind? 15 per cent of organisations were saying they didn’t see any need to invest in new technology. Those organisations are essentially taking a back seat, and choosing not to take advantage of all the new things around us.” Worryingly, Cable observes, an area where that investment is least likely to be made is HR. HR departments tend to have a slightly more female workforce. Is this therefore another inhibitor to women being able to contribute in technology-enabled organisations? It’s a subtle point, but this is certainly something that HR – and organisations in general – should be aware of.


Software Deployment Strategy: How to Get It Right the First Time

There is an intense focus today on customer experience (CX). Ensuring that your website visitors have access to the information they want, and that they can find it quickly and easily, is just part of your overall CX. This makes your customer-facing technologies – the ones that power your website or mobile app – critical investments, even though they may not carry the price tag of an ERP system. Even the smallest investments need to be vetted to make sure they work with existing infrastructure and processes. One small piece of website tech that ends up degrading your online CX can cost your organization millions in a very short amount of time. There are simply too many choices just a click away today if something isn’t working properly. Differentiating technologies are also more likely to be customized than an application like ERP, which can often use a number of out-of-the-box processes. These are areas where a software deployment strategy involving your EA team can help guide the software purchase and deployment process.


To help determine which combination of cloud email security products might work best for any organization, we believe a thorough analysis of existing email security products is needed to understand the current solution’s capabilities completely. Gartner recommends, “Leverage incumbent email security products by verifying and optimizing their capabilities and corresponding configurations. This will serve as the start of a gap analysis to determine where supplementation or replacement may be required.” The Cisco Threat Analyzer for Office 365 quickly detects security gaps in Office 365 email inboxes to provide visibility into threats that may have gone undetected and identify security vulnerabilities. In addition, to support this growing cloud email platform user base, Cisco Email Security now has data centers with global coverage located in North America, Europe and Asia. These locations allow local customers to satisfy data access and sovereignty requirements in their specific regions and provide the confidence that their data will remain within region. For those install base customers using an on-premises or hybrid solution, this global coverage gives them the peace of mind to migrate from on-premises to cloud email.



Quote for the day:

"Tenderness and kindness are not signs of weakness and despair, but manifestations of strength and resolution." -- Khalil Gibran

Daily Tech Digest - September 01, 2019

Software Ate The World, Now AI Is Eating Software

The extent to which Andreessen’s cherished software companies are weaving AI into their products is, however, often limited. Instead, a new slew of start-ups now incorporates an infrastructure based around the above-mentioned AI-facilitating processes from their very foundation. Driven by an increase in efficiency, these new companies use AI to automate and optimize the very core processes of their business. As an example, no less than 148 start-ups are aiming to automate the very costly process of drug development in the pharmaceutical industry, according to a recent update on BenchSci. Likewise, AI start-ups in the transportation sector create value by optimizing shipments, thus vastly reducing the amount of empty or idle transports. Also, the process of software development itself is affected. AI-powered automatic code completion and generation tools, such as TabNine, TypeSQL and BAYOU, are being created and made ready to use.



The disruption effort began after Avast in March traced back a rise in stealthy cryptocurrency mining infections to variants of a worm called Retadup, written in both AutoIt and AutoHotkey scripts. Researchers began studying the command-and-control communications being used to control infected endpoints, or bots, says Jan Vojtesek, a malware researcher at Avast, in a research report. "After analyzing Retadup more closely, we found that while it is very prevalent, its C&C communication protocol is quite simple," he says. "We identified a design flaw in the C&C protocol that would have allowed us to remove the malware from its victims' computers had we taken over its C&C server." Avast alerted France's national cybercrime investigation team, C3N, that servers in France appeared to be hosting the majority of the command-and-control infrastructure for distributing and controlling the Retadup worm - in other words, self-replicating malware. Avast also shared a technique that it thought might allow authorities to neutralize existing infections.


Unlike some companies where departmental work groups are not always accessible to those outside those groups, Facebook employees can participate in any group. “Most of those groups are what we call open QA. What that means is that people outside of those groups can also see the information. And you’ll be surprised how this tackles a number of challenging problems as the company grows,” Nguyen said. For one, open work groups will help to prevent duplication of projects, since developers can see what other teams are doing, and avoid building the same things. In cases where duplicate projects are already being built, Nguyen would step in to bring the teams together in an open dialogue. “There were a few teams within infrastructure and Instagram that were building different technologies for logging of data,” Nguyen recalled. “One of the engineers at Instagram escalated [the issue] to me and I set up a meeting for them to work together.”


4 Cybersecurity Professionals That Can Benefit from Threat Intelligence

The first layer of defense that most organizations rely on is their own security operations center (SOC). Whether outsourced or in-house, security operations analysts need to possess a broad set of skills to be effective. This includes capabilities in log monitoring, penetration testing, incident response, access management, and more. Each one of these tasks requires a different group of systems and solutions to work well, which are usually not integrated. This means that SOCs often have to deal with unending alerts and big data that may not come with much context. Threat intelligence enriches alert management. It provides context to help SOCs know which alerts need to be prioritized. Some threat intelligence platforms readily offer this kind of automation using machine learning (ML) or similar technologies. Just like SOCs, incident response teams face the challenge of getting information that lacks context. They are also bombarded with numerous alerts from their security information and event management (SIEM) solutions and so are forced to choose which ones to prioritize.
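A simplified sketch of that enrichment step, with an invented threat feed and alert format: each alert is scored against known indicators so the highest-context ones rise to the top of the analyst queue.

```python
# Hypothetical alert enrichment: score alerts against a threat feed
# so the SOC can triage the riskiest first. All data is invented
# (the IPs are reserved documentation addresses).
THREAT_FEED = {                      # indicator -> reputation score
    "203.0.113.7": 95,               # known C&C address
    "198.51.100.22": 40,
}

alerts = [
    {"id": 1, "src_ip": "192.0.2.10", "event": "failed login"},
    {"id": 2, "src_ip": "203.0.113.7", "event": "outbound connection"},
]

for alert in alerts:
    alert["intel_score"] = THREAT_FEED.get(alert["src_ip"], 0)

# Highest-context alerts float to the top of the analyst queue
for alert in sorted(alerts, key=lambda a: a["intel_score"], reverse=True):
    print(alert)
```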


Cloud Storage Is Expensive? Are You Doing it Right?


A common solution, adopted by a significant number of organizations now, is data repatriation: bringing data back on premises (or to a colocation service provider) and accessing it locally or from the cloud. Why not? At the end of the day, the bigger the infrastructure the lower the $/GB and, above all, no other fees to worry about. When thinking about petabytes, there are several optimizations to take advantage of, which can lower the $/GB considerably: fat nodes with plenty of disks, multiple media tiers for performance and cold data, data footprint optimizations, and so on, all translating into low and predictable costs. At the same time, if this is not enough, or you want to keep a balance between CAPEX and OPEX, go hybrid. Most storage systems in the market now allow you to tier data to S3-compatible storage systems, and I’m not talking only about object stores – NAS and block storage systems can do the same. I covered this topic extensively in this report, but check with your storage vendor of choice and I’m sure they’ll have solutions to help with this.
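As one concrete form of the tiering the author mentions, an S3 lifecycle rule (here via boto3) can move cold objects to a cheaper storage class automatically; the bucket name, prefix, and 30-day cutoff below are hypothetical examples.

```python
# Sketch of an S3 lifecycle rule that tiers cold data to Glacier
# after 30 days. Requires AWS credentials; names are hypothetical.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-cold-data",
            "Filter": {"Prefix": "cold/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)
```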


The First Artificial Memory Has Been Successfully Created and Implanted

Previous research had shown that it was possible to partially transfer memories from one rodent to another via reproducing the electrical activity associated with a specific memory in one mouse and jolting it into the brain of another mouse. This new experiment is different. This time the memory was created completely artificially from the ground up. This consisted of a few parts. First, they used a technique called optogenetics. This involves fiber optic cables surgically implanted into the olfactory region of the mice’s brain so that light can be used to turn on proteins associated with specific smells. To do that, the mice had to be genetically engineered to only produce the light-sensitive protein in the region associated with acetophenone—AKA the scent of cherry blossoms. Now they could artificially create the scent of cherry blossoms in the brain of a mouse. So we’re already into some wacky stuff, but don’t worry. It gets wackier.


Semi-supervised learning explained

Self-training uses a model’s own predictions on unlabeled data to add to the labeled data set. You essentially set some threshold for the confidence level of a prediction, often 0.5 or higher, above which you believe the prediction and add it to the labeled data set. You keep retraining the model until there are no more predictions that are confident. This raises the question of which model to use for training. As in most machine learning, you probably want to try every reasonable candidate model in the hopes of finding one that works well. Self-training has had mixed success. The biggest flaw is that the model is unable to correct its own mistakes: one high-confidence (but wrong) prediction on, say, an outlier, can corrupt the whole model. Multi-view training trains different models on different views of the data, which may include different feature sets, different model architectures, or different subsets of the data. There are a number of multi-view training algorithms, but one of the best known is tri-training.
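A bare-bones version of the self-training loop described above, using scikit-learn on synthetic data; the 0.9 confidence threshold and the choice of logistic regression are arbitrary illustrations.

```python
# Self-training sketch: promote confident predictions on unlabeled
# points into the labeled set, retrain, and repeat until none remain.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)
labeled = np.zeros(len(X), dtype=bool)
labeled[:50] = True                      # only 50 labels to start
pseudo_y = y.copy()                      # ground truth kept for the labeled slice

while not labeled.all():
    model = LogisticRegression(max_iter=1000).fit(X[labeled], pseudo_y[labeled])
    probs = model.predict_proba(X[~labeled])
    confident = probs.max(axis=1) >= 0.9
    if not confident.any():
        break                            # no confident predictions left
    idx = np.flatnonzero(~labeled)[confident]
    pseudo_y[idx] = model.classes_[probs[confident].argmax(axis=1)]
    labeled[idx] = True                  # trust our own predictions

print(f"labeled after self-training: {labeled.sum()} / {len(X)}")
```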


Sprint Reviews With Kanban

Kanban is sometimes thought of as a soft option because “flow” is misinterpreted as “whatever gets delivered gets delivered”. A team will start with what it is, realistically, doing now. There is no need to vamp Sprints. The odious Sprint Goal and the contrived forecast of work in the Sprint Backlog are dispensed with. It looks as if the team can no longer be held hostage to fortune. In Kanban there is no Great Lie to be fabricated about a planned Sprint outcome, and, it is assumed, there is no great commitment that can hang over team members’ heads like the Sword of Damocles. What possible use for a monstrous Sprint Review can there be? Instead, there ought to be a succession of mini-reviews with the Product Owner as each item is completed. Having mini-reviews can be useful and timely, and they are all very well. In truth, however, a professional Kanban team will not escape from making a serious commitment, nor would a team ever seek to do so. For one thing, its members will need to understand and define a commitment point in their workflow.


Hackers Hit Unpatched Pulse Secure and Fortinet SSL VPNs


Based on their count of recent publicly exposed common vulnerabilities and exposures in SSL VPNs, it appeared that Cisco equipment would be the riskiest to use. To test that hypothesis, the researchers began looking at SSL VPNs and found exploitable flaws in both Pulse Secure and Fortinet equipment. The researchers reported flaws to Fortinet on Dec. 11, 2018, and to Pulse Secure on March 22. ... In response, Fortinet released a security advisory on May 24 and updates to fix 10 flaws, some of which could be exploited to gain full, remote access to a device and the network it was meant to be protecting. In particular, it warned that one of the flaws, "a path traversal vulnerability in the FortiOS SSL VPN web portal" - CVE-2018-13379 - could be exploited to enable "an unauthenticated attacker to download FortiOS system files through specially crafted HTTP resource requests." Such FortiOS system files contain sensitive information, including passwords, meaning attackers could quickly give themselves a way to gain full access to an enterprise network.


How to bolster IAM strategies using automation


Litton argues that automation is also important for protecting critical data assets. “An example of this is when an employee leaves an organisation or a technology supplier relationship ends,” he says. “Automation can ensure that their accounts do not remain in an active state, thus eliminating a potential avenue through which bad actors can access data. When implemented properly, automated IAM solutions can also identify orphan accounts automatically and alert system owners.” Identity management systems comprise users, applications and policies, all of which govern how people are able to use software. Litton says automated IAM systems can fully automate identity creation at scale; automatically manage user access; apply role- and attribute-driven policies; and completely remove the need for passwords, helping to improve the user experience, while decreasing the helpdesk support burden.
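A hedged sketch of the orphan-account check Litton describes; the HR roster and account store below are invented stand-ins for real HR and directory systems.

```python
# Hypothetical orphan-account detection: flag and disable accounts
# with no matching active employee, then alert system owners.
active_employees = {"asmith", "bjones"}           # from the HR system
accounts = {
    "asmith":  {"active": True},
    "bjones":  {"active": True},
    "cformer": {"active": True},                  # employee has left
}

orphans = [user for user, acct in accounts.items()
           if acct["active"] and user not in active_employees]

for user in orphans:
    accounts[user]["active"] = False              # deprovision automatically
    print(f"alert: disabled orphan account '{user}'")
```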



Quote for the day:


"Leaders keep their eyes on the horizon, not just on the bottom line." -- Warren G. Bennis


Daily Tech Digest - August 31, 2019

AI ‘Emotion Recognition’ Can’t Be Trusted


If emotion recognition becomes common, there’s a danger that we will simply accept it and change our behavior to accommodate its failings. In the same way that people now act in the knowledge that what they do online will be interpreted by various algorithms (e.g., choosing to not like certain pictures on Instagram because it affects your ads), we might end up performing exaggerated facial expressions because we know how they’ll be interpreted by machines. That wouldn’t be too different from signaling to other humans. Barrett says that perhaps the most important takeaway from the review is that we need to think about emotions in a more complex fashion. The expressions of emotions are varied, complex, and situational. She compares the needed change in thinking to Charles Darwin’s work on the nature of species and how his research overturned a simplistic view of the animal kingdom. “Darwin recognized that the biological category of a species does not have an essence, it’s a category of highly variable individuals,” says Barrett. “Exactly the same thing is true of emotional categories.”


With customer protection in mind, regulators are staying ahead of this technology and introducing the first wave of AI regulations meant to address AI transparency. This is a step in the right direction in terms of helping customers trust AI-driven experiences while enabling businesses to reap the benefits of AI adoption. This first group of regulations concerns a customer's ability to understand an AI-driven, automated decision. That is especially important for key decisions such as lending, insurance and health care, but it also applies to personalization, recommendations and the like. The General Data Protection Regulation (GDPR), specifically Articles 13 and 22, was the first regulation on automated decision-making to state that anyone subject to an automated decision has the right to be informed and the right to a meaningful explanation. According to clause 2(f) of Article 13: "[Information about] the existence of automated decision-making, including profiling ... and ... meaningful information about the logic involved [is needed] to ensure fair and transparent processing."


Apple iPhones Hacked by Websites Exploiting Zero-Day Flaws

Google reported two serious flaws - CVE-2019-7287 & CVE-2019-7286 - to Apple on Feb. 1, setting a seven-day deadline before releasing them publicly, since they were apparently still zero-day vulnerabilities as well as being used in active, in-the-wild attacks. Apple patched the flaws via iOS 12.1.4, released on Feb. 7, together with a security alert. Hacking modern operating systems - including iOS - typically requires chaining together exploits for multiple flaws. In the case of mobile operating systems, for example, attackers may require working exploits that allow them to initially access a device - typically via a WebKit-based browser - and then to escape sandboxes and jailbreak the device to install a malicious piece of code. All told, Google says that it counted five exploit chains that made use of 14 vulnerabilities: "seven for the iPhone's web browser, five for the kernel and two separate sandbox escapes." The identified exploits could have been used to hack devices running iOS 10, which was released on Sept. 13, 2016, and nearly every newer version of iOS, through to the latest version of iOS 12.


The challenge: creating a better future of work


With appropriate policies, any job can become a good job. There is nothing about today's low-wage service jobs, home-care work and gig jobs that means we can't make them good jobs, as we have done before. The jobs of the future are upon us today. We can't turn back the clock and resurrect all of the manufacturing jobs that have disappeared, but we can create the good jobs of the future. Rather than wondering what kinds of jobs we will be doing for robot bosses, we need to decide what we want work and jobs to do for us, our families and our communities. The state can take the lead in charting a new path forward that works for all Californians. In an executive order creating a Future of Work Commission, Gov. Gavin Newsom emphasized the need to "modernize the social compact between the government, the private sectors and workers." We can begin to formulate policies that set guardrails on how robots and artificial intelligence can be used to improve the quality of jobs, not just replace them. We can look beyond upskilling workers to upgrading jobs.


Electronic word-of-mouth can make or break a product launch


eWOM can also affect product strategy. Executives at GM scrapped plans for a type of Buick crossover after reading tweets criticizing the design. And beauty products retailer Sephora canceled the release of a Starter Witch Kit — an innovative product that combined perfumes with tarot cards and a crystal ball, among other items — after critics accused the brand of trivializing witchcraft as a religious practice. So what’s the key to getting product launches to go viral, generating positive eWOM across the Internet? Researchers have yet to connect the dots between innovativeness, a firm’s marketing strategies, and the sentiments expressed through eWOM channels, particularly as they relate to the success of new products. But a new study aims to make those connections and provides suggestions for creating effective viral marketing campaigns for new products. To arrive at their findings, the authors conducted a two-phase study. The first phase analyzed a data set of millions of eWOM posts on message boards, forums, and social media platforms such as Facebook, Twitter, and Instagram.


Why 2-factor authentication isn't foolproof


Two-factor authentication is certainly more effective than a username and password alone. But the risks of attack and data breach remain if 2FA is poorly implemented, especially when appropriate checks aren't performed before the authentication challenges are presented. Password leakage and credential misuse are on the rise, and attackers are continuously devising new ways to gain improper access to organizations and systems. We need to embrace evolving approaches to identity security that improve security posture while keeping the user experience simple. Modern, adaptive, risk-based approaches that leverage real-time metadata and threat-detection techniques should be the standard. Intelligence needs to be built into the authentication process so that it can apply dynamic controls in real time and block authentication requests that are considered high risk. These risk signals and controls include detecting anonymous proxy usage and malicious IP addresses, applying dynamic geo-controls and device controls, and analyzing for unusual access patterns or overly privileged accounts.
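As a rough illustration of this risk-based approach, the toy sketch below scores an authentication attempt against the risk factors listed above; the factor names, weights, and thresholds are invented for illustration and are not taken from any product.

```python
# A toy risk-scoring sketch of "adaptive" authentication.
# The factors, weights, and thresholds below are illustrative assumptions.
RISK_WEIGHTS = {
    "anonymous_proxy": 40,
    "malicious_ip": 50,
    "unusual_geo": 25,
    "unknown_device": 20,
    "unusual_access_pattern": 30,
}
BLOCK_AT = 70    # refuse the attempt outright
STEP_UP_AT = 30  # demand a second factor

def decide(signals: dict) -> str:
    score = sum(w for name, w in RISK_WEIGHTS.items() if signals.get(name))
    if score >= BLOCK_AT:
        return "block"          # high risk: don't even present a challenge
    if score >= STEP_UP_AT:
        return "challenge_2fa"  # medium risk: require the second factor
    return "allow"              # low risk: keep the user experience simple

# e.g. decide({"malicious_ip": True, "unknown_device": True}) -> "block"
# e.g. decide({"unknown_device": True})                       -> "allow"
```

The point of the sketch is the ordering: risk is evaluated before the 2FA challenge is offered, so high-risk requests can be blocked outright rather than being given a chance to defeat the second factor.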


Rating IoT devices to gauge their impact on your network

Devices with low-bandwidth requirements include smart-building devices such as connected door locks and light switches that mostly say "open" or "closed," "on" or "off." Fewer demands on a given data link open up the possibility of using less-capable wireless technology. Low-power WAN and Sigfox might not have the bandwidth to handle large amounts of traffic, but they are well suited for connections that don't need to move much data in the first place, and they can cover significant areas. Sigfox has a range of 3 to 50 km depending on the terrain; Bluetooth reaches 100 to 1,000 meters, depending on the class of Bluetooth being used. Conversely, an IoT setup such as multiple security cameras streaming through a central hub to a backend for image analysis will require many times more bandwidth. In such a case the networking piece of the puzzle will have to be more capable and, consequently, more expensive. Widely distributed devices could demand a dedicated LTE connection, for example, or perhaps even a microcell of their own for coverage.
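A quick back-of-the-envelope comparison shows why the two classes of device need such different links; the traffic figures below are illustrative assumptions, not measurements or vendor specifications.

```python
# Back-of-the-envelope bandwidth sizing (all figures are assumptions).
LOCK_EVENT_BYTES = 50        # a smart lock mostly says "open"/"closed"
LOCK_EVENTS_PER_DAY = 200
CAMERA_STREAM_MBPS = 4.0     # one security camera streaming continuously
CAMERAS = 16

lock_bps = (LOCK_EVENT_BYTES * 8 * LOCK_EVENTS_PER_DAY) / 86_400
print(f"One lock:    ~{lock_bps:.2f} bit/s on average")   # well under 1 bit/s
print(f"{CAMERAS} cameras: ~{CAMERAS * CAMERA_STREAM_MBPS:.0f} Mbit/s sustained")
```

Under these assumptions the lock averages under one bit per second, comfortably inside Sigfox's roughly 100 bit/s class of link, while the camera cluster needs tens of megabits sustained, which is LTE or wired territory.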


Addressing Large, Complex Unresolved Problems With AI

- Tracking the demand for skills in the market, and the educational infrastructure available to supply those skills, through a Skills Repository. This will help keep education concurrent with market demands and ensure much better alignment between academia and corporates;
- Automating routine, time-consuming tasks: creating and grading test papers, developing personalized benchmarks for each student, identifying gaps in student development, and tracking aptitude and attentiveness within each subject, enabling teachers to focus on curriculum development, coaching and mentoring, and improving behavioral and personality aspects of students; ...
- Reviewing and summarizing long-drawn-out cases and their histories through natural language processing and voice recognition;
- Routing Right-to-Information and governance-related citizen requests through intelligent bots, making it more efficient to get critical information;
- Employing anomaly-detection frameworks to surface fraudulent transactions, especially among land deals (a sketch of this last idea follows below).
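The last item is the most concrete of these. As a hedged illustration, the sketch below applies scikit-learn's IsolationForest to synthetic data; the feature set and the 1% contamination rate are assumptions made for illustration, not details from the article.

```python
# An anomaly-detection sketch using an isolation forest.
# Features and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical land-deal features: price per sq. meter, days to close,
# number of parties involved.
deals = rng.normal(loc=[1000, 60, 2], scale=[150, 20, 0.5], size=(5000, 3))

model = IsolationForest(contamination=0.01, random_state=0)
flags = model.fit_predict(deals)       # -1 marks suspected anomalies
suspicious = deals[flags == -1]
print(f"Flagged {len(suspicious)} of {len(deals)} deals for review")
```

The output is a shortlist for human review, not a verdict: in a fraud setting the flagged deals would be routed to investigators rather than acted on automatically.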


TrickBot Variant Enables SIM Swapping Attacks: Report

TrickBot Variant Enables SIM Swapping Attacks: Report
The operators of this version of TrickBot are able to intercept a victim's PIN as well as other credentials when the victim attempts to log onto the websites of the three wireless carriers, according to the report. This enables a so-called SIM swapping attack, in which a victim's phone number is ported to another SIM card under the attackers' control. The attacker can then collect one-time passwords or trick telecom employees into giving out information about the victim through social engineering. These moves create opportunities for further attacks, such as account takeover schemes. "Interception of short message service (SMS)-based authentication tokens or password resets is frequently used during account takeover fraud," the SecureWorks report notes. Over the past year, SIM swapping has been used in the U.K. in attempted account takeover attacks targeting banks and other financial institutions. Account takeover attacks can pave the way for credential stuffing, a technique in which stolen username and password pairs are tried against other services to steal data.


Great Global Meetings: Navigating Cultural Differences

Team members will know their cultural differences are getting in the way, but they don't have a safe or honest way to talk about them. Without a chance for team members to work through these differences, a collision course is inevitable. By missing the opportunity to openly explore how cultural differences affect its ability to collaborate, a team may become mired in cultural misunderstandings and handcuffed by invalid assumptions. Many people are afraid of saying the wrong thing or asking a question that might be offensive. Global team leaders should initiate candid discussions about cross-cultural differences as early as possible, ideally when a new team is forming. Cultural differences will affect collaboration one way or another, so it's best to have team members familiarize themselves with each other's cultures right up front, so they can decide how they want to work together moving forward. Allocate time for checkpoints at key junctures in the conversation, and pause periodically to let all participants absorb what has just been said. Some people, Americans in particular, often feel compelled to puncture silence with a comment.



Quote for the day:


"Tend to the people, and they will tend to the business." -- John C. Maxwell