Daily Tech Digest - November 11, 2018

Web applications are the most visible front door to any enterprise and are often designed and built without strong security in mind. Stressing out over hardware vulnerabilities like Spectre or Meltdown is fun and trendy, but while you're digging a moat around your castle someone is prancing across the drawbridge using SQL injection (SQLi) or cross-site scripting (XSS). The OWASP Broken Web Applications Project comes bundled in a virtual machine (VM) that contains a large collection of deliberately broken web applications with tutorials to help students master the various attack vectors. From trivial to more difficult, the project is designed to lead the user to a better understanding of web application security. The OWASP Broken Web Applications Project includes the appropriately named Damn Vulnerable Web Application, deliberately broken for your pentesting enjoyment. For maximum lulz, download OWASP Zed Attack Proxy, configure a local browser to proxy traffic through ZAP, and get ready to attack some damn vulnerable web applications.
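SQL injection, the attack prancing across that drawbridge, takes only a few lines to demonstrate. The snippet below is a self-contained sketch using Python's built-in sqlite3 module (the table and login functions are made up for illustration); it shows the classic `' OR '1'='1` bypass against string-concatenated SQL, and the parameterized query that stops it:

```python
import sqlite3

# Throwaway in-memory database with a single user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # Classic SQLi: attacker-controlled input is concatenated into the query,
    # so input can be parsed as SQL rather than treated as data.
    query = f"SELECT * FROM users WHERE name = '{name}' AND password = '{password}'"
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # Parameterized query: input is bound as data, never parsed as SQL.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchall()

# The classic payload bypasses the vulnerable check...
assert login_vulnerable("alice", "' OR '1'='1") != []
# ...but fails against the parameterized version.
assert login_safe("alice", "' OR '1'='1") == []
```

The vulnerable query expands to `... AND password = '' OR '1'='1'`, which is true for every row; the bound version compares the literal payload string against the stored password and matches nothing.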



Emotional skill is key to success

According to Susan David, emotional agility is about adaptability, facing emotions and moving on from them. It is also the ability to master the challenges life throws at us in an increasingly complex world. She added that while emotional intelligence is not values-focused, emotional agility is. "Women do have some advantages in the domain of emotional agility," she said. "When I go into organisations and look at hotspots or business units that are extremely high functioning, what we find is that the most important predictor of enabling these units is what I call 'individualised considerations'. That means leaders who are able to see the individual as an individual and this has diversity at its core. "These leaders do not stereotype or exclude," she added. "Of course, this doesn't work always in practice and there is a lot of work to be done in this regard in organisations and businesses."


Hybrid Blockchain - The Best Of Both Worlds

The hybrid blockchain is best defined as a blockchain that attempts to use the best parts of both private and public blockchain solutions. In an ideal world, a hybrid blockchain means controlled access and freedom at the same time. A hybrid blockchain is distinguished by the fact that it is not open to everyone, yet it still offers blockchain features such as integrity, transparency, and security. As usual, a hybrid blockchain is entirely customizable: its members can decide who can participate in the blockchain and which transactions are made public. This brings the best of both worlds and ensures that a company can work with its stakeholders in the best possible way. We hope this gives you a clear view of the hybrid blockchain definition. To get a much better picture, we recommend checking out some hybrid blockchain projects.


How universities should teach blockchain


The core issue is that blockchain is really hard to teach correctly. There’s no established curriculum, few textbooks exist, and the field is rife with misinformation, making it hard to know what is credible. Protocols are evolving at a rapid pace, and it’s tough to tell the difference between a white paper and reality. Having so much attention around blockchain specifically frames it as a miraculous and novel development rather than an outgrowth of decades of computer science research. Matt Blaze, an associate professor at the University of Pennsylvania and a cyber-security researcher, points out that the push for degree programs in blockchain is part of a trend of overspecialization by some engineering schools. The concepts sound good on paper but don’t live up to their promise. Despite the best of intentions, trends change, and students get stuck in narrow career paths. In order to avoid these pitfalls, universities will have to take an approach they’re not used to.


Experience an RDP attack? It’s your fault, not Microsoft’s

If you are compromised because of RDP, the problem is you or your organization. It isn’t a problem with Microsoft or RDP. You don’t need to put a VPN around RDP to protect it. You don’t need to change default network ports or some other black magic. Just use the default security settings or implement the myriad other security defenses you should have already been using. If you’re getting hacked because of RDP, you’re not doing a bunch of things that any good computer security defender should be doing. There are many ransomware programs, like SamSam and CrySiS, as well as cryptominers, that attempt brute-force guessing attacks against accessible RDP services. So many companies have had their RDP services compromised that the FBI and Department of Homeland Security (DHS) have issued warnings. The warning should be, “Your security sucks!” It isn’t like the malware programs are conducting a zero-day attack against some unpatched vulnerability.


Data as a Driver of Economic Efficiency

The General Data Protection Regulation (GDPR) became enforceable on May 25, 2018. The regulation aims to protect data by ‘design and default,’ whereby firms must handle data according to a set of principles. GDPR mandates opt-in consent for data collection and assigns substantial liability risks and penalties for data flow and data processing violations. GDPR’s enactment is particularly likely to influence technology ventures, given an increasing need for the use of data as a core product input. Specifically, data has become a key factor in technology-driven innovation and production, spanning industry sectors from pharmaceuticals and healthcare, to automotive, smart infrastructure, and broader decision making. This report presents economic analyses of the consequences of data regulation and opt-in consent requirements for investment in new technology ventures, for consumer prices, and for economic welfare.


A Two-Minute Guide To Quantum Computing

Most of us aren't clued up on the art of harnessing elementary particles like electrons and photons, so to understand how quantum computing works, meet Scottish startup M Squared. The company’s bread and butter is making some of the most accurate lasers in the world, using pure light and precise wavelengths. Such lasers can be used like a scalpel, one atom wide, to carve out the transistors of a silicon chip.  Typically the chip or brain in your smartphone is a centimeter square. It has a small section in the middle made up of around 300 million transistors, with connections spreading out like fingers to talk to the screen, the camera, the battery and more.  But imagine a chip with no transistors at all, and instead a small chamber that’s controlling the processes and energy levels inside of atoms. This is quantum computing, the next frontier of machines that think not in bytes but in powerful qubits. It sounds cutting-edge, but scientists have been studying the theory of quantum computing for 30 years, and some say the first mainstream applications are just around the corner.


How Do Self-Driving Cars See? (And How Do They See Me?)


We’ll start with radar, which rides behind the car’s sheet metal. It’s a technology that has been going into production cars for 20 years now, and it underpins familiar tech like adaptive cruise control and automatic emergency braking. ... The cameras—sometimes a dozen to a car and often used in stereo setups—are what let robocars see lane lines and road signs. They only see what the sun or your headlights illuminate, though, and they have the same trouble in bad weather that you do. But they’ve got terrific resolution, seeing in enough detail to recognize your arm sticking out to signal that left turn. ... If you spot something spinning, that’ll be the lidar. This gal builds a map of the world around the car by shooting out millions of light pulses every second and measuring how long they take to come back. It doesn’t match the resolution of a camera, but it should bounce enough of those infrared lasers off you to get a general sense of your shape. It works in just about every lighting condition and delivers data in the computer’s native tongue: numbers.
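The "measure how long the pulses take to come back" step behind lidar is a one-line calculation: distance is the speed of light times the round-trip time, halved because the pulse travels out and back. A minimal sketch:

```python
# Speed of light in meters per second.
C = 299_792_458

def lidar_distance_m(round_trip_seconds):
    # The pulse travels to the target and back, so halve the path length.
    return C * round_trip_seconds / 2

# A pulse returning after ~66.7 nanoseconds hit something about 10 m away.
print(round(lidar_distance_m(66.7e-9), 1))  # 10.0
```

Repeating this millions of times per second across a spinning array of lasers is what produces the point-cloud map described above.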



Facial recognition's failings: Coping with uncertainty in the age of machine learning

The shortcomings of publicly available facial-recognition systems were further highlighted in summer this year, when the American Civil Liberties Union (ACLU) tested the AWS Rekognition service. The test found that 28 members of the US Congress were falsely matched with mug shots from publicly available arrest photos. Professor Chris Bishop, director of Microsoft's Research Lab in Cambridge, said that as machine learning technologies were deployed in different real-world locales for the first time it was inevitable there would be complications. "When you apply something in the real world, the statistical distribution of the data probably isn't quite the same as you had in the laboratory," he said. "When you take data in the real world, point a camera down the street and so on, the lighting may be different, the environment may be different, so the performance can degrade for that reason. "When you're applying [these technologies] in the real world all these other things start to matter."


Robots Have a Diversity Problem


It is well-documented that A.I. programs of all stripes inherit the gender and racial biases of their creators on an algorithmic level, turning well-meaning machines into accidental agents of discrimination. But it turns out we also inflict our biases onto robots. A recent study led by Christoph Bartneck, a professor at the Human Interface Technology Lab at the University of Canterbury in New Zealand, found that not only are the majority of home robots designed with white plastic, but we also actually have a bias against the ones that are coated in black plastic. The findings were based on a shooter bias test, in which participants were asked to perceive threat level based on a split-second image of various black and white people, with robots thrown into the mix. Black robots that posed no threat were shot more than white ones. “The only thing that would motivate their bias [against the robots] would be that they would have transferred their already existing racial bias to, let’s say, African-Americans, onto the robots,” Bartneck told Medium. “That’s the only plausible explanation.”



Quote for the day:


"Remember this: Anticipation is the ultimate power. Losers react; leaders anticipate." -- Tony Robbins


Daily Tech Digest - November 10, 2018

How the Blockchain Could Break Big Tech’s Hold on A.I.

Unlike Google and Facebook, which store the data they get from users, the marketplaces built on Ocean Protocol will not have the data themselves; they will just be places for people with data to meet, ensuring that no central player can access or exploit the data. “Blockchains are incentive machines — you can get people to do stuff by paying them,” said Trent McConaghy, one of the founders of Ocean Protocol, who has been working in artificial intelligence since the 1990s. The goal, Mr. McConaghy said, is to “decentralize access to data before it’s too late.” Ocean is working with several automakers to collect data from cars to help create the artificial intelligence of autonomous cars. All the automakers are expected to share data so none of them have a monopoly over it. Another start-up, Revel, will pay people to collect the data that companies are looking for, like pictures of taxis or recordings of a particular language. Users can also let their phones and computers be used to process and categorize the images and sounds.


Unit Testing – Abstracting Creation of Simple Values

When writing your unit tests you can use your chosen mocking framework to provide a fake implementation of IDateTimeProvider that provides a static value for _dateTimeProvider.Now. This is a very practical pattern for many situations, especially if the requirements of the provider become more complex. However, there are some notable disadvantages to this approach. Firstly, the actual requirements here are very simple, so it could be considered overkill to create an extra class for the provision of a single date time object. Especially if you consider that you’ll also need to configure dependency resolution if you’re using an IOC container and instantiate mock objects in your tests. Maybe creating a provider object is more effort than it’s worth. Secondly, as noted with method injection it is reasonable to suggest that the responsibility of choosing and applying a timestamp should be with the DocumentService itself.
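The provider pattern the article describes is easy to sketch. The original example is C# (`IDateTimeProvider`, `DocumentService`); the version below is a loose Python translation of the same idea, with hypothetical class names, showing how injecting a clock makes timestamps deterministic in tests:

```python
from datetime import datetime

class SystemClock:
    """Production clock: returns the real current time."""
    def now(self):
        return datetime.utcnow()

class FixedClock:
    """Test double: always returns the same instant."""
    def __init__(self, instant):
        self._instant = instant
    def now(self):
        return self._instant

class DocumentService:
    def __init__(self, clock):
        self._clock = clock  # the provider is injected, not hard-coded

    def stamp(self, document):
        document["created_at"] = self._clock.now()
        return document

# In a test, freeze the clock so the timestamp is deterministic.
frozen = datetime(2018, 11, 10, 12, 0, 0)
service = DocumentService(FixedClock(frozen))
doc = service.stamp({"title": "report"})
assert doc["created_at"] == frozen
```

The trade-off the article raises applies here too: for a single timestamp, two extra classes plus container wiring may be more ceremony than the requirement deserves.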


Linux cryptocurrency miners are installing rootkits to hide themselves

Besides allowing KORKERDS to survive OS reboots, the rootkit component also contained a slightly strange feature. Trend Micro says that KORKERDS' authors modified the rootkit to hide the cryptominer's main process from Linux's native process monitoring tools. "The rootkit hooks the readdir and readdir64 application programming interfaces (APIs) of the libc library," researchers said. "The rootkit will override the normal library file by replacing the normal readdir file with the rootkit's own version of readdir." This malicious version of readdir works by hiding processes named "kworkerds" -- which in this case is the cryptominer's process. Linux process monitoring tools will still show 100 percent CPU usage, but admins won't be able to see (and kill) the kworkerds process causing the CPU resource consumption problems.
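The real hook described by Trend Micro is native code patched into libc; the snippet below is not that rootkit, just a Python sketch of the hook's effect: a readdir-style listing that silently drops entries matching the miner's process name (a temporary directory stands in for /proc here):

```python
import os
import tempfile

HIDDEN = "kworkerds"  # process name the rootkit hides

def readdir_hooked(path):
    # The hooked readdir silently drops entries matching the miner's name,
    # which is how /proc-based tools such as ps and top lose sight of it.
    return [e for e in sorted(os.listdir(path)) if HIDDEN not in e]

# Simulate a /proc-like directory with one "process" entry per file.
with tempfile.TemporaryDirectory() as proc:
    for name in ("1", "42", HIDDEN):
        open(os.path.join(proc, name), "w").close()
    print(readdir_hooked(proc))  # the kworkerds entry never appears
```

Because ps and top enumerate /proc through readdir, filtering at that layer hides the process from every tool above it, while the CPU it burns remains plainly visible.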


Why Is Data Science Different than Software Development?

Because the exact variables and metrics (and their potential transformations and enrichments) are not known beforehand, the data science development process must embrace an approach that supports rapid testing, failing, learning, wash and repeat. This attitude is reflected in the “Data Scientist Credo”: Data science is about identifying those variables and metrics that might be better predictors of performance; to codify relationships and patterns buried in the data in order to drive optimized actions and automation. Step 5 of the Data Scientist Development Methodology is where the real data science work begins – where the data scientist uses tools like TensorFlow, Caffe2, H20, Keras or SparkML to build analytic models – to codify cause-and-effect. This is true science, baby!! The data scientist will explore different analytic techniques and algorithms to try to create the most predictive models.
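That try-fail-learn-repeat loop can be shown in miniature. The "models" below are toy stand-ins (a constant predictor versus a hand-guessed linear one), not TensorFlow or SparkML, but the structure — score every candidate, keep the best — is the loop the methodology describes:

```python
import random

random.seed(1)

# Synthetic "performance" data with a roughly linear trend plus noise.
xs = [i / 10 for i in range(100)]
ys = [2.0 * x + 1.0 + random.gauss(0, 0.3) for x in xs]

def mse(predict):
    # Mean squared error: the score we use to compare candidates.
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

mean_y = sum(ys) / len(ys)
candidates = {
    "constant (mean)": lambda x: mean_y,
    "linear guess":    lambda x: 2.0 * x + 1.0,
}

# Try every candidate, keep whichever predicts best.
best = min(candidates, key=lambda name: mse(candidates[name]))
print(best)  # the linear model wins on this data
```

In practice the candidate set would be generated by real algorithms and feature transformations, but the wash-and-repeat scoring loop is the same.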


The New Cross-Platform Standard: Version 2.0


Microsoft has a new approach: Standard Class Library projects. A Standard Class Library consists of those APIs that "are intended to be available on all .NET implementations." The news here is that there is only one Standard and it supports all the .NET platforms -- no more profiles agglomerated into an arbitrary set of interfaces. The catch here is that the Standard may not include something you want ... at least, not yet. With PCLs there was always the possibility that, if you dropped one of the platforms you wanted to support, you might pick up the API you wanted. That's not an option with the Standard, which is monolithic. In some ways it's like setting the version of the .NET Framework you want to support in your project's properties: The lower the version you pick, the less functionality you have. Obviously, then, what matters in the .NET Standard is comprehensiveness. There have been several iterations of the .NET Standard specification, each of which includes more .NET Framework APIs.


Increasing value of personal data a 21st century challenge


“Something had to be done, and if it has achieved nothing else, the EU’s General Data Protection Regulation has focused people’s minds and got company executives and board members to take this issue seriously because now they have to be accountable and declare breaches,” he said. This means data protection in Europe, said Shamah, is no longer just the concern of technical teams in organisations, but also chief executives and shareholders. “In the light of the recent revelations about the misuse of data, everyone needs to consider what kind of digital footprint they want to leave: a permanent one, like those left by the first astronauts on the surface of the moon, or a temporary one, like those left in the sand on a beach.” The aim, he said, should be for digital footprints that last only for as long as they are needed and are then erased without a trace.


Why employees’ lapses in protecting data can sting organizations


In assessing the facts in the URMC case, it seems like attention focused on the departing/departed nurse practitioner asking for a patient list, which was provided in spreadsheet form. More often, when an employee leaves, there is a clear acknowledgment that the employee is cut off from all of the employer’s patient information because HIPAA does not allow continued access. The seemingly voluntary transmission offers a plausible basis for fining an entity when the ultimate bad act was on the part of the departed employee. As such, the takeaway from the URMC case is to not be overly generous, as misuse of information can come back to haunt the organization. Ensuring the privacy and security of patient information needs to be a paramount concern at all times. While it is impossible to control all the actions of employees, organizations can and must take reasonable and appropriate action to secure information as much as possible.


Why open source isn't just about code

We've seen things like Firefox really succeed where people come together from all over the world to build a product openly, and invite contributions. And we've seen that succeed and really take down a monopoly. And we've seen this work, time and time again, in more than just code, but in businesses, in government, in science. Where people, when they work openly, when they're inviting contributions, they're more innovative, they get better ideas. And they get more buy-in from the community who wants to use them. ... If it's open source you can hear more from the people that are using it. Places like Lego actually use that, if they're thinking about what Lego line to produce next, they have surveys and people can suggest things. And company-creating. You get better innovation when more people, and the right experts, are really working on the products. There's a lot of different advantages. Those are three of them that I can think of now.


Dutch Police Bust 'Cryptophone' Operation

Dutch police say they discovered the cryptophone operation while investigating an alleged money laundering operation. Police didn't just shut down the network. Instead, they seized a server and began monitoring the service. "We had sufficient evidence that these phones were used among criminals. We have succeeded in intercepting encrypted communication messages between these phones, decrypting them and having them live for some time," Dutch police said on Tuesday. "This has not only given us a unique insight into existing criminal networks; we have also been able to intercept drugs, weapons and money." Police say their investigation has already allowed them to bust a drugs lab in Enschede, Netherlands, and make 14 arrests, including a 46-year-old man from Lingewaard who's suspected of running the cryptophone company, as well as his alleged partner, a 52-year-old man from Boxtel.


Data Lake and Modern Data Architecture in Clinical Research and Healthcare

The primary challenge in implementing a data lake architecture in healthcare has to do with making sure the data platform is architected with data security, privacy and protection in mind while enabling real time data transmission, collection, ingestion and integration at scale. Not to mention, challenges in dealing with unstructured and binary data in the data lake cannot be underestimated. From the data lake architecture perspective, supporting both batch and near-real-time data integration and business intelligence is a real practical challenge. Making integrated data available to all constituents in a self-service manner is another big challenge. ... Our enterprise data lake is a consumer to our MDM platform, which collects all master entities from all of our operational and transactional systems, and masters them in real time using sophisticated matching and merging algorithms, metadata management and semantic matching.



Quote for the day:


"Individual commitment to a group effort - that is what makes a team work, a company work, a society work, a civilization work." -- Vince Lombardi


Daily Tech Digest - November 09, 2018

Cisco Accidentally Released Dirty Cow Exploit Code in Software


“A failure in the final QA validation step of the automated software build system for the Cisco Expressway Series and Cisco TelePresence Video Communication Server (VCS) software inadvertently allowed a set of sample, dormant exploit code used internally by Cisco in validation scripts to be included in shipping software images,” the company said in an advisory. “This includes an exploit for the Dirty CoW vulnerability (CVE-2016-5195). The purpose of this QA validation step is to make sure the Cisco product contains the required fixes for this vulnerability.” Cisco said that it is not aware of “malicious use of the issue” and that the issue does not open the impacted software (Cisco Expressway Series and Cisco TelePresence Video Communication Server image versions X8.9 through X8.11.3) to any sort of attack. “The impacted software images will be removed and will be replaced by fixed images,” the company said. It did not specify when.



The Role of a Manager Has to Change in 5 Key Ways

“First, let’s fire all the managers” said Gary Hamel almost seven years ago in Harvard Business Review. “Think of the countless hours that team leaders, department heads, and vice presidents devote to supervising the work of others.” Today, we believe that the problem in most organizations isn’t simply that management is inefficient, it’s that the role and purpose of a “manager” haven’t kept pace with what’s needed. For almost 100 years, management has been associated with the five basic functions outlined by management theorist Henri Fayol: planning, organizing, staffing, directing, and controlling. These have become the default dimensions of a manager. But they relate to pursuing a fixed target in a stable landscape. Take away the stability of the landscape, and one needs to start thinking about the fluidity of the target. This is what’s happening today, and managers must move away from the friendly confines of these five tasks.


Cloud, edge, and fog computing: understanding the practical application for each

Fog computing effectively “decentralises” computational and analytical power. It sits between your local equipment and mobile devices — equipment with limited processing power and storage, in other words — and provides a way to sift through streams of information from these and other components in your IoT. You can get a better mental image of fog computing by thinking about driverless automobiles navigating a city block. If the vehicles, their sensors, and their controllers are the “edge layer” for a city’s smart transportation system — we’ll get to edge computing in a moment — then there are likely micro-data centres alongside mesh routers and cell towers that serve as the “fog layer.” Fog computing isn’t quite as decentralised as the edge, but it does further reduce the amount of data transferred across the network or upwards into the cloud layer. It facilitates communication and collaboration between the “nodes” in the edge layer. In the example above, the nodes are the driverless cars.


Don’t make your cloud migration a house of cards

The biggest architectural mistake that I see in the cloud involves coupling. Back in the day, applications were tightly coupled to other applications and data sources. If one thing stopped, the entire system stopped. So if the database went down, all connected applications did as well, including any systems that sent or received data from the database. Years ago, we learned that tight coupling was bad. It killed resiliency, scalability, and the ability to independently use resources such as applications, databases, and queues. Consultants like me gave presentations on it, and books were published on the topic, but IT organizations are still making the same architectural mistakes in 2018 that diminish the value of cloud computing. IT is not fixing the things that need fixing before moving them to the cloud. At the core of the issue is money. Enterprises do not allocate enough funding to fix these issues before they move to the cloud. I assume the hope is that the issues won’t be noticed, or that the use of a more modern platform will magically fix the issues despite their poor architectures.
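The coupling problem is easy to show in miniature. The classes below are hypothetical, but they contrast a direct database call, which fails whenever the database does, with a queue that decouples the writer from the database's availability:

```python
import queue

class Database:
    """Stand-in for a database that may be up or down."""
    def __init__(self, up=True):
        self.up = up
    def write(self, record):
        if not self.up:
            raise ConnectionError("database is down")

class QueuedWriter:
    """Loose coupling: hand records to a queue and carry on; a worker
    drains the queue whenever the database is reachable again."""
    def __init__(self, db):
        self.db = db
        self.pending = queue.Queue()
    def write(self, record):
        self.pending.put(record)  # never blocks on the database
    def drain(self):
        while not self.pending.empty():
            self.db.write(self.pending.get())

db = Database(up=False)
writer = QueuedWriter(db)
writer.write({"id": 1})  # succeeds even though the DB is down
db.up = True
writer.drain()           # delivered once the DB recovers
```

Calling `db.write` directly while the database is down would have raised immediately; the queued version keeps the application running and lets the system heal independently, which is the resiliency tight coupling kills.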


Seeing is believing, the old saw has it, but the truth is that believing is seeing: Human beings seek out information that supports what they want to believe and ignore the rest. Hacking that human tendency gives malicious actors a lot of power. We see this already with disinformation (so-called "fake news") that creates deliberate falsehoods that then spread under the guise of truth. By the time fact checkers start howling in protest, it's too late, and #PizzaGate is a thing. Deepfakes exploit this human tendency using generative adversarial networks (GANs), in which two machine learning (ML) models duke it out. One ML model trains on a data set and then creates video forgeries, while the other attempts to detect the forgeries. The forger creates fakes until the other ML model can't detect the forgery. The larger the set of training data, the easier it is for the forger to create a believable deepfake. This is why videos of former presidents and Hollywood celebrities have been frequently used in this early, first generation of deepfakes — there's a ton of publicly available video footage to train the forger.
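A real GAN pits two neural networks against each other; the toy loop below is a deliberate caricature with no neural networks at all, but it shows the adversarial shape: a "forger" keeps adjusting its output until a "detector" fit to the real data can no longer flag it as fake.

```python
import random

random.seed(0)

# "Real" data: samples clustered around 10.0.
real = [random.gauss(10.0, 0.5) for _ in range(1000)]
data_mean = sum(real) / len(real)

def detector(sample, tolerance=0.1):
    # A crude "discriminator": flags anything far from the real data's mean.
    return abs(sample - data_mean) > tolerance  # True means "fake"

# A crude "generator": starts with a bad forgery and nudges it until the
# detector stops flagging it.
forgery, step = 0.0, 0.05
while detector(forgery):
    forgery += step

print(round(forgery, 2))  # has crept close to the real mean (~10.0)
```

In a real GAN both sides are learned models trained jointly, and the larger the training set, the more convincing the forger's output, which is exactly why well-photographed public figures make the easiest deepfake subjects.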


The creation of one code base that is easy to maintain and publishes well across multiple OSes is no easy feat, said Jonathan Marston, director of software at Optimus Ride, a self-driving car company in Boston. Tools such as Adobe Air have tried and failed to achieve it, he said. "In the past, that dream has never lived up to the reality," Marston said. The ability to share code across multiple mobile OSes is getting more attainable with tools such as NativeScript and React Native, but the particular idiosyncrasies of each OS make it difficult to achieve complete code sharing, said Jesse Crossen, lead developer of VoiceThread, an education software company in Durham, N.C. For example, developers might want to write one set of code for an iOS visual component and another for an Android visual component, due to different screen sizes and resolutions. "You're always going to have that level of customization per platform or have [an app] that's a little bit generic," Crossen said.


While IoT is generally thought of in terms of consumer products, he pointed out that some IoT systems are widely used in the business context such as building management systems that control the heating, cooling, door locks and fire alarms. “It is important that businesses think about the IoT devices they have in their environments. The gap between IT and services often creates opportunities for technology to cause problems, and so there are some key questions businesses need to ask suppliers, retailers, hardware manufacturers so you know whether you are buying a good product or one full of security vulnerabilities.” Munro said he was able to buy a controller of a business management system online and was able to find vulnerabilities that could be exploited to discover the password of the embedded server that would enable an attacker to take complete control of the building management system.


Microsoft: .NET Core Is the Future, So Get Moving


"As we move forward into the future, with .NET Core 3, we're going to see some more workloads that we're going to be working on here, mainly Windows desktop," Massi said. "We're bringing Windows desktop workloads to .NET Core 3, as well as AI and IoT scenarios. "The big deal here is now that if you're a WinForms or WPF developer you can actually utilize the .NET Core runtime." It's still Windows, she said. It's still your Windows application framework for desktop apps, but developers will be able to take advantage of the .NET Core feature set, such as improved performance, side-by-side installs, language features and other innovations being made in the platform itself. "So that's kind of a big deal," Massi said. While .NET Core is about improved performance, self-contained .exe files for desktop application deployment flexibility and more, it also provides UI interop. "It's about, instead of totally rewriting your apps to take advantage of Windows 10 or more modern UI controls, we're making it so that you can use modern UI controls in WinForms and WPF -- that's what UI interop is," Massi said.



10 signs you may not be cut out for a systems analyst position

The ability to say "No" is important in managing all areas of life, but as a systems analyst, someday your job may depend on it. Suppose you're in a meeting with your boss, their boss, and management from the operations side. Someone tries to get you to commit, on the spot, to adding new functionality, and your boss is not interceding for you. Under pressure, many people would say "Yes" just to get out of the meeting. But if you don't know absolutely that you can do the project, within the time and budget required, resist the temptation to get them off your back temporarily. Agreeing to a task that turns out to be unreasonable is just a setup for failure. ... Saying "No" may prevent you from promising the impossible, but it's best to use the word sparingly. To succeed as a systems analyst, you'll need to think of yourself as an in-house consultant. The business needs IT tools to make money, and you have to figure out how to provide those tools. Work with your in-house customers to develop a plan you can say "Yes" to. Figure out what you need—more time, more money, more human or technical resources—and be prepared to back up your requests.


The security skills shortage: A golden opportunity for creative CISOs


The very shallow security skills talent pool has also led to another opportunity, one that serves to up-skill and empower in-house (and even outsourced) development teams. It is a known fact that most of the world’s highest-scale security breaches were made possible due to errors in the software code itself, and with the average breach costing in excess of US$3.6 million, it makes sense to examine the application security budget. It stands to reason that if developers remain untrained, the same mistakes will be made year after year, and the same reactive, expensive after-the-fact fixes will need to be applied. It seems a crazy way to burn through cash, all while an organization’s reputation as a security-conscious company goes down the drain. So, why not change it up and secure software from the start of production? Empowering development teams to write secure code is the golden opportunity for CISOs to seize proactive control over looming security issues, and where there is the chance for fast, easy and measurable improvements – for both security and development teams.



Quote for the day:


"Perhaps the ultimate test of a leader is not what you are able to do in the here and now - but instead what continues to grow long after you're gone" -- Tom Rath


Daily Tech Digest - November 08, 2018

Each private cubicle sits on short legs, enabling small warehouse robots to scuttle around underneath them. Then, the robots can pick up the cubes and move them around the office based on what each person and team needs for the day. For instance, if you have a day of heads-down work, you’d get assigned a private cubicle so you can focus. If you have a day full of meetings, and you don’t need private space, your cube combines with other cubes to create a larger space in which to work with your colleagues. The robots shift the office in real time to make this happen. ... For now, the idea seems farfetched, but Rapt’s design principal and CEO David Galullo believes it’s closer than you might think. He says the studio is working with clients who are interested in how a workplace can be reconfigured over a weekend to respond to a team’s changing needs. The key is to keep the office as spare as possible, so you can easily move things around, which he believes is one reason that many companies prefer an open plan.


HSBC Bank Alerts US Customers to Data Breach
An HSBC spokeswoman tells Information Security Media Group that less than 1 percent of HSBC's U.S. customers were affected by the data breach. The bank declined to quantify how many U.S. customers it has. But The Telegraph reports that HSBC manages about 1.4 million U.S. accounts, meaning 14,000 customers may have been affected. "HSBC regrets this incident, and we take our responsibility for protecting our customers very seriously," the bank says in a statement sent to ISMG. "We responded to this incident by fortifying our log-on and authentication processes, and implemented additional layers of security for digital and mobile access to all personal and business banking accounts," the statement notes. "We have notified those customers whose accounts may have experienced unauthorized access and are offering them one year of credit monitoring and identity theft protection service." HSBC's data breach notification to victims also notes: "You may have received a call or email from us so we could help you change your online banking credentials and access your account."


What Makes SSDs Different?

An SSD in a laptop will often go for long periods of time without any IO, so it has plenty of time to perform garbage collection and similar functions. An enterprise SSD, however, may face a full-time 24×7 workload and never have idle time for garbage-collection-type functions. In the enterprise, consistent performance matters more than peak performance. Enterprises need SSD suppliers to create drives that focus on consistently delivering IO (or IOPS) no matter how heavy the workload, rather than on peak performance figures that look good on a marketing datasheet. The key challenge to delivering consistent performance is how the SSD handles write IO, especially under heavy random workloads. With each write, the flash media needs to find available space to place that write. If no space is available, it has to make space "on the fly," by rearranging data within cells to create contiguous space for the new write. Garbage collection routines are supposed to make this space available in advance, but they are not always afforded the time to complete their tasks.
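The dynamic described above can be sketched as a toy model: a drive with a small free-block pool, where background garbage collection only runs during idle time. The class and numbers below are purely illustrative (this is not a real flash translation layer), but they show why a sustained workload forces reclamation onto the write path.

```python
class ToySSD:
    """Toy model: writes consume free blocks; GC reclaims invalidated ones.
    Background GC only runs during idle ticks, so a sustained workload
    forces expensive on-the-fly GC (the latency spikes described above)."""

    def __init__(self, free_blocks=8):
        self.free = free_blocks
        self.stale = 0            # blocks holding invalidated data
        self.on_the_fly_gc = 0    # writes that had to wait for reclamation

    def write(self):
        if self.free == 0:        # no space left: reclaim "on the fly" (slow path)
            self.on_the_fly_gc += 1
            self.gc()
        self.free -= 1
        self.stale += 1           # each overwrite invalidates an old block

    def gc(self):                 # reclaim all stale blocks at once
        self.free += self.stale
        self.stale = 0

    def idle(self):               # background GC gets a chance to run
        self.gc()

# Sustained 24x7-style workload: no idle ticks, GC lands on the write path.
busy = ToySSD()
for _ in range(100):
    busy.write()

# Laptop-style workload: idle time between writes lets background GC keep up.
light = ToySSD()
for _ in range(100):
    light.write()
    light.idle()

print(busy.on_the_fly_gc, light.on_the_fly_gc)
```

The busy drive repeatedly stalls on write-path reclamation, while the lightly loaded drive never does — the consistency gap enterprise buyers care about.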


Java and MongoDB 4.0 Support for Multi-Document ACID Transactions

MongoDB 4.0 adds support for multi-document ACID transactions. But wait... Does that mean MongoDB did not support transactions until now? No: MongoDB has always supported transactions in the form of single-document transactions. MongoDB 4.0 extends these transactional guarantees across multiple documents, multiple statements, multiple collections, and multiple databases. What good would a database be without any form of transactional data-integrity guarantee? ... Multi-document ACID transactions in MongoDB are very similar to what you probably already know from traditional relational databases. MongoDB’s transactions are a conversational set of related operations that must either commit atomically or roll back fully, with all-or-nothing execution. Transactions are used to make sure operations are atomic even across multiple collections or databases. Thus, with snapshot-isolation reads, another user sees either all of the operations or none of them.
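To make the all-or-nothing guarantee concrete, here is a toy in-memory sketch — deliberately not the MongoDB driver API, which needs a running cluster — in which writes staged across several "collections" become visible to readers only at commit:

```python
class ToyTransaction:
    """Toy illustration of all-or-nothing commit semantics (not the
    MongoDB API): operations are staged and only become visible to
    readers after commit; abort discards every staged change."""

    def __init__(self, store):
        self.store = store        # shared dict of dicts: {collection: {id: doc}}
        self.staged = []          # buffered (collection, id, doc) writes

    def insert(self, collection, doc_id, doc):
        self.staged.append((collection, doc_id, doc))

    def commit(self):
        for collection, doc_id, doc in self.staged:   # applied as one unit
            self.store.setdefault(collection, {})[doc_id] = doc
        self.staged.clear()

    def abort(self):
        self.staged.clear()       # nothing staged ever reaches readers

db = {}
txn = ToyTransaction(db)
txn.insert("orders", 1, {"item": "book"})
txn.insert("payments", 1, {"amount": 12.50})   # spans two "collections"
assert db == {}                                # readers see none of it yet...
txn.commit()
assert db["orders"][1]["item"] == "book"       # ...and now see all of it
```

A real MongoDB transaction adds snapshot isolation, retry logic and durability on top, but the visibility contract is the same: all of the operations or none of them.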


Seth James Nielson on Blockchain Technology for Data Governance


Data security and data management are much more complicated. Every member of the Blockchain must preserve and protect a private key. If that key is ever compromised by an unauthorized party, there is little that can be done to revoke the compromised key. Perhaps just as bad, if the key is lost (e.g., accidentally deleted), that user's access to the system is permanently lost as well. It is estimated, for example, that 20% of all the Bitcoins in the world have been lost in this manner. Finally, by itself, Blockchain doesn't really offer much for data management; rather, it enables new forms of data management. Supply chain is a great example where Blockchain appears to be having some great success. When you look at worldwide, complicated supply chains, keeping track of data between hundreds, or even thousands, of inter-operating vendors is extremely challenging. Giving these participants a Blockchain on which to record data of their own, and track related data of others, is a fantastic fit.


Strange snafu misroutes domestic US Internet traffic through China Telecom

The sustained misdirection further underscores the fragility of BGP, which forms the underpinning of the Internet's global routing system. In April, unknown attackers used BGP hijacking to redirect traffic destined for Amazon’s Route 53 domain-resolution service. The two-hour event allowed the attackers to steal about $150,000 in digital coins as unwitting people were routed to a fake MyEtherWallet.com site rather than the authentic wallet service they intended to reach. When end users clicked through a message warning of a self-signed certificate, the fake site drained their digital wallets. ... “While one may argue such attacks can always be explained by ‘normal’ BGP behavior, these, in particular, suggest malicious intent, precisely because of their unusual transit characteristics—namely the lengthened routes and the abnormal durations,” the authors wrote. The Canada to South Korea leak, the report said, lasted for about six months and started in February 2016.
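Those "unusual transit characteristics" are exactly what monitoring tools look for. The sketch below — with illustrative thresholds and ASNs (AS4134 belongs to China Telecom; the other ASNs here are just examples) — flags an AS path that either transits an unexpected foreign network or is abnormally long:

```python
def suspicious_route(as_path, expected_max_len=5, foreign_transit=frozenset({4134})):
    """Flag a BGP AS path that transits an ASN it has no business crossing
    (e.g. AS4134, China Telecom, in the middle of a domestic route) or that
    is abnormally lengthened. Thresholds are illustrative, not operational."""
    transit_hops = as_path[1:-1]              # ignore origin and destination
    if any(asn in foreign_transit for asn in transit_hops):
        return True                           # unexpected transit network
    return len(as_path) > expected_max_len    # lengthened route

# Normal-looking route: short, no unexpected transit.
assert not suspicious_route([7018, 3356, 701])
# Same endpoints, but traffic detours through AS4134 in the middle.
assert suspicious_route([7018, 3356, 4134, 701])
# Lengthened route, the other trait the report's authors highlight.
assert suspicious_route([7018, 1, 2, 3, 4, 5, 701])
```

Real detection systems compare observed paths against months of baseline routing data rather than a fixed list, but the signal is the same: routes that are longer, or cross networks, that they historically never did.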


Powerful $39 Raspberry Pi clone: Rock Pi 4

As mentioned, the processor is relatively capable for the price, with a dual-core 2.0GHz Arm Cortex-A72 paired with a quad-core 1.5GHz Arm Cortex-A53 in a big.LITTLE configuration, which swaps tasks between cores for power efficiency. Smooth 4K video playback should be possible courtesy of the HDMI 2.0 port and Mali-T864 GPU. Fast SSD storage is also an option, via an M.2 interface supporting up to a 2TB NVMe SSD, and if the onboard SD card storage is too slow, there's an option to add up to 128GB of eMMC storage to the board. Though the memory is relatively fast — 64-bit, dual-channel 3,200Mb/s LPDDR4 — only 1GB is available on the base $39 model, ranging up to 4GB for $65. There's a decent selection of ports, with four USB Type-A ports: one USB 3.0 host, one USB 3.0 OTG, and two USB 2.0 host ports. For those interested in building their own homemade electronics, there's also a 40-pin expansion header for connecting boards, sensors and other hardware. Though this header's pin layout is similar to that of the Pi, the Rock Pi's maker said it wasn't possible to make it "100% GPIO compatible".


Banks in the changing world of financial intermediation

The dual forces of technological (and data) innovation and shifts in the regulatory and broader sociopolitical environment are opening great swaths of this financial-intermediation system to new entrants, including other large financial institutions, specialist-finance providers, and technology firms. This opening has not had a one-sided impact nor does it spell disaster for banks. Where will these changes lead? Our view is that the current complex and interlocking system of financial intermediation will be streamlined by the forces of technology and regulation into a simpler system with three layers.  ... Our view of a streamlined system of financial intermediation, it should be noted, is an “insider’s” perspective: we do not believe that customers or clients will really take note of this underlying structural change. The burning question, of course, is what these changes mean for banks.


The Growing Significance Of DevOps For Data Science


New datasets result in training and evolving new ML models that need to be made available to the users. Some of the best practices of continuous integration and deployment (CI/CD) are applied to ML lifecycle management. Each version of an ML model is packaged as a container image that is tagged differently. DevOps teams bridge the gap between the ML training environment and model deployment environment through sophisticated CI/CD pipelines. When a fully trained ML model is available, DevOps teams are expected to host the model in a scalable environment. ... The rise of containers and container management tools makes ML development manageable and efficient. DevOps teams are leveraging containers for provisioning development environments, data processing pipelines, training infrastructure and model deployment environments. Emerging technologies such as Kubeflow and MLflow focus on enabling DevOps teams to tackle the new challenges involved in dealing with ML infrastructure.
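One small piece of such a pipeline is deriving an immutable, unique tag for each trained model's container image, so the CI/CD system promotes exactly the artifact it tested. A minimal sketch — the registry and model names here are made up for illustration:

```python
import hashlib

def model_image_tag(model_bytes: bytes, version: str) -> str:
    """Derive a reproducible container-image tag from the trained model's
    content hash, so every retrained model gets a distinct, immutable tag.
    The registry path is a hypothetical example."""
    digest = hashlib.sha256(model_bytes).hexdigest()[:12]
    return f"registry.example.com/ml/churn-model:{version}-{digest}"

# Two different trained artifacts can never share a tag...
tag_a = model_image_tag(b"weights-v1", "1.0.0")
tag_b = model_image_tag(b"weights-v2", "1.1.0")
assert tag_a != tag_b

# ...while the same artifact always maps back to the same tag.
assert model_image_tag(b"weights-v1", "1.0.0") == tag_a
```

Content-addressed tags like this keep the training environment and the deployment environment pointing at provably the same model, which is the gap the article says DevOps teams are asked to bridge.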


Legacy Apps - Dealing with IFRAME Mess (Window.postMessage)

In the old days, iframes were used a lot: not only for embedding content from other sites, cross-domain Ajax, or hacking an overlay that covered selects, but also to provide boundaries between page zones or to mimic a desktop-like window layout… The window.postMessage method was introduced into browsers to enable safe cross-origin communication between Window objects, and it can be used to pass data between iframes. In this post, I’m assuming that the application with iframes is old but can be run in Internet Explorer 11, which is the last version Microsoft released (in 2013). From what I’ve seen, it’s often the case that Internet Explorer has to be supported, but at least it’s the latest version of it. ... Thanks to the postMessage method, it’s very easy to create a mini message bus so that events triggered in one iframe can be handled in another, if the target iframe chooses to take an action. Such an approach reduces coupling between iframes, as one frame doesn't need to know any details about the elements of the other.
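The dispatch pattern behind such a mini message bus is language-agnostic; here it is sketched in Python for brevity, with subscribers standing in for the per-iframe postMessage handlers (in the browser, `post` would wrap `window.postMessage` and `subscribe` a `message` event listener):

```python
class MiniBus:
    """Sketch of the postMessage mini-bus pattern: one frame publishes a
    typed event, and only frames that subscribed to that event type react.
    Neither side knows anything about the other's internals."""

    def __init__(self):
        self.handlers = {}        # event type -> list of callbacks

    def subscribe(self, event_type, handler):
        self.handlers.setdefault(event_type, []).append(handler)

    def post(self, event_type, payload):
        # Targets opt in; unsubscribed events are silently ignored.
        for handler in self.handlers.get(event_type, []):
            handler(payload)

bus = MiniBus()
received = []
bus.subscribe("user-selected", received.append)   # "iframe B" listens
bus.post("user-selected", {"id": 42})             # "iframe A" publishes
bus.post("ignored-event", {"id": 7})              # nobody subscribed: no-op
assert received == [{"id": 42}]
```

The coupling reduction comes from the event type being the only shared contract: the publishing frame never reaches into the other frame's DOM.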



Quote for the day:


"If you only read the books that everyone else is reading, you can only think what everyone else is thinking" -- Haruki Murakami


Daily Tech Digest - November 07, 2018

Accountancy and technology: the changing role of the accountant

The change is probably less in classic, financial accounting and more on the side of financial analysis and managerial accounting. It will be shifting from getting the numbers out of the system and into PowerPoint in an error-free way to really doing something meaningful with these numbers: becoming a business partner and advising the counterparts in the business. That may mean understanding drivers, reviewing trends, and coming to conclusions. There are also the interpersonal skills: it’s about not just working with the numbers but working with the people on the business side. ... Like many disruptive changes, it’s starting now and it will take its time to fully come to fruition. There is a learning curve that the industry will have to go through. It will take some time; we will find that some problems lend themselves better to the algorithms we have today, and the algorithms are getting better all the time.


Event Sourcing to the Cloud at HomeAway

Event sourcing allows services to separate their read and write concerns and truly allows services to encapsulate data. Full encapsulation not only prevents a death-star architecture but reduces the integration cost of each microservice. One of the biggest advantages of an event sourcing architecture is data democratization: having data at the center of the architecture allows services to easily discover and subscribe, which is essential for developer velocity and for implementing near-real-time experiences. Event sourcing also opens the door for pattern-based programming. If the patterns and libraries are set in place, the goal should be for an entry-level engineer to execute the development lifecycle with very little ramp-up time or training. Event sourcing provides a great audit trail, as the entirety of history is persisted, which makes auditing and visualizing what happened very easy. I think this is a very critical aspect as services become more asynchronous, as customers need real-time updates or feedback about the state of their transaction.
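A minimal sketch of the core idea: the write side appends immutable events, and the read side derives current state by replaying them — which is also why the persisted history doubles as an audit trail. The account/balance domain below is purely illustrative.

```python
class EventStore:
    """Minimal event-sourcing sketch: writes append immutable events, and
    the read side rebuilds state by replaying them. Because nothing is
    ever updated in place, the full history remains auditable."""

    def __init__(self):
        self.events = []          # the append-only log is the source of truth

    def append(self, event):
        self.events.append(event)

    def balance(self):            # a "read model" derived by replaying history
        total = 0
        for kind, amount in self.events:
            total += amount if kind == "deposit" else -amount
        return total

store = EventStore()
store.append(("deposit", 100))
store.append(("withdraw", 30))
store.append(("deposit", 5))
assert store.balance() == 75
assert len(store.events) == 3     # every state change is preserved for audit
```

In a production system the log would live in something like Kafka and the read models in separate, subscribable stores, but the separation of write concern (append) from read concern (replay/project) is exactly this shape.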


Cybersecurity, AI skills to dominate IT staff hires in 2019

While large, enterprise firms will focus on cybersecurity and AI, small to midsize firms are more likely to seek new employees with DevOps skills, end-user hardware experience, and proficiency in IT infrastructure. Enterprise staff reported that keeping infrastructure up to date and implementing new, innovative solutions -- such as AI and Internet-of-Things (IoT) technologies -- are some of the biggest IT challenges organizations face today. Smaller companies, however, are faced with the problem of convincing boards of the importance of implementing IT projects and of how to adhere to acceptable security practices and standards. The report includes responses from 1,000 IT professionals. When asked about their own prospects, 26 percent of respondents said they planned to find a new role; eight percent plan to leave the field entirely, six percent hope to transition into IT consultancy, and five percent are on the way to retirement.


Despite Fraud Awareness, Password Reuse Persists for Half of U.S. Consumers


As National Fraud Day approaches (Nov. 11), it remains clear that more consumer education is required when it comes to thwarting scammers and identity thieves. Despite almost half of U.S. consumers (49 percent) believing their security habits make them vulnerable to information fraud or identity theft, 51 percent admit to reusing passwords/PINs across multiple accounts such as email, computer log in, phone passcode and bank accounts. ... The good news is, more than nine in 10 (91 percent) Baby Boomers closely monitor their financial account activity such as bank statements, credit reports and credit card statements each week, compared to Millennials (85 percent) and Gen Zs (86 percent). Even so, nearly three in 10 of polled consumers (27 percent) said that they don’t know how to find out if they’ve become a victim; and one in five consumers (20 percent) admit that if they became a victim of fraud, they wouldn’t necessarily know how to report it.


5 best practices for third-party data risk management

Recent events leading to overshared data, breached data, operational failures and other incidents have prompted many businesses to re-evaluate how they approach third-party risk management (TPRM) as many of these situations were attributed to a third party. As such, boards of directors and their C-suite teams understand the critical need to be more focused and informed about their third parties, related risk management activities and key decisions, especially for those third parties deemed critical to the organization. EY recently conducted its sixth annual global financial services third-party risk management survey. In a nutshell, it shows that many companies are continuing to make upgrades to the governance and oversight of this function. Yet, it’s clear that formidable challenges remain. To help businesses stay ahead of the curve, outlined below are five leading practices in third-party risk management from which organizations can benefit.


In the Age of A.I., Is Seeing Still Believing?


“Prediction is really the hallmark of intelligence,” Efros said, “and we are constantly predicting and hallucinating things that are not actually visible.” In a sense, synthesizing is simply imagining. The apparent paradox of Farid’s license-plate research—that unreal images can help us read real ones—just reflects how thinking works. In this respect, deepfakes were sparks thrown off by the project of building A.I. ... A world saturated with synthesis, I’d begun to think, would evoke contradictory feelings. During my time at Berkeley, the images and videos I saw had come to seem distant and remote, like objects behind glass. Their clarity and perfection looked artificial (as did their gritty realism, when they had it). But I’d also begun to feel, more acutely than usual, the permeability of my own mind. I thought of a famous study in which people saw doctored photographs of themselves. As children, they appeared to be standing in the basket of a hot-air balloon.


Breach Settlement Has Unusual Penalty

This case is noteworthy for several reasons, including the state attorney choosing to take action against both the covered entity and business associate involved with the breach, but also for the enforcement action against the BA's owner. "The attorney general of New Jersey has an array of penalties and relief to enforce the state's Consumer Fraud Act, including fines and suspension or revocation of authority against a company or individual to do business in the state," says privacy attorney David Holtzman, vice president of security compliance at the consultancy CynergisTek. "While it is not uncommon for a negotiated settlement agreement to include a period of exclusion for a company or its officers, this is the first time I am aware of the New Jersey attorney general applying this in relation to an investigation regarding unauthorized disclosure of health information." There have been a handful of similar actions by state and federal regulators in other cases involving data security, he notes.


How to move beyond REST for microservices communication


From a design and architecture perspective, request synchronicity breaks a fundamental part of good microservice design: autonomy. It's presumed that, when synchronous calls block a microservice, it is no longer an open resource; when that presumption is untrue, it can lead to confusion and instability. It's possible to make REST semisynchronous through methods such as HTTP polling. The server-push features of HTTP/2 also alleviate issues around binary payloads and can multiplex requests on a single port. Microservices developers who want to keep the HTTP model can settle on HTTP/2, but there are still other options. In asynchronous microservices communication, a message is sent to one microservice, and it moves along until it requires a response. That response may come in the form of an event or a callback. Asynchronous microservices make connections through some form of a service or message bus.
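The asynchronous style can be sketched with in-process queues standing in for a message bus (the service and message names below are made up): the caller posts a message and is never blocked on the callee; the response comes back later as an event on a reply queue.

```python
import asyncio

async def main():
    """Sketch of asynchronous microservice communication: the caller drops
    a message on a 'bus' (a queue here) and keeps its autonomy; the response
    arrives later as an event instead of a blocked synchronous call."""
    requests, replies = asyncio.Queue(), asyncio.Queue()

    async def billing_service():             # a consumer on the bus
        order = await requests.get()
        await replies.put({"order": order, "status": "charged"})

    asyncio.create_task(billing_service())
    await requests.put("order-17")           # fire the message and move on
    event = await replies.get()              # response arrives as an event
    return event

result = asyncio.run(main())
print(result)
```

Swap the in-process queues for a broker such as RabbitMQ or Kafka and the same shape becomes inter-service: the bus, not a blocked HTTP connection, carries both the request and the eventual response event.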


Sending WhatsApp Messages from a Win32 C++ Program

This article is the second part, following the first part. In this part, I will explain how to send images and documents to a group. As mentioned in part 1, there are several service providers; we have chosen one of them (WhatsAppMate) and started a free trial. However, their code samples for using their services are in almost any programming language except C++, so we wrote our own C++ class for that purpose. Sending documents and files is a bit more complicated and will be explained in this article. WhatsApp is a multi-platform free service for chatting via video or voice and for sending messages to individuals or groups, including files, media, etc. WhatsApp is better than the old SMS because it is free and has more features. During our day-to-day work, we need to set up all sorts of alerts to be sent to a group that shares a job or works on a feature, and being notified automatically when something is done makes life easier.


Decoupling in Cloud Era: Building Cloud Native Microservices with Spring Cloud Azure


Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach. ... A cloud native application is specifically designed for a cloud computing environment, as opposed to simply being migrated to the cloud. ... The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and are independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.




Quote for the day:


"A leadership disposition guides you to take the path of most resistance and turn it into the path of least resistance." -- Dov Seidman