Daily Tech Digest - May 17, 2018

7 Basic Rules for Button Design


Every item in a design requires effort by the user to decode. Generally, the more time users need to decode the UI, the less usable it becomes for them. But how do users understand whether a certain element is interactive or not? They use previous experience and visual signifiers to clarify the meaning of the UI object. That’s why it is so important to use appropriate visual signifiers (such as size, shape, color, shadow, etc.) to make the element look like a button. Visual signifiers carry essential information: they help to create affordances in the interface. Unfortunately, in many interfaces the signifiers of interactivity are weak and demand extra interaction effort; as a result, they reduce discoverability. If clear affordances of interaction are missing and users struggle to tell what is “clickable” and what is not, it won’t matter how cool we make the design. If users find it hard to use, they will find it frustrating and ultimately not very usable. Weak signifiers are an even more significant problem for mobile users. To work out whether an individual element is interactive, desktop users can move the cursor over the element and check whether the cursor changes its state; mobile users don’t have that option.



Serverless deployment lifts enterprise DevOps velocity


At a tipping point of serverless expertise, enterprises will start to put existing applications in serverless architectures as well. Significant challenges remain when converting existing apps to serverless, but some mainstream companies have already started that journey. Smart Parking Ltd., a car parking optimization software maker based in Australia, moved from its own data centers in Australia, New Zealand and the U.K. to AWS cloud infrastructure 18 months ago. Its next step is to move to an updated cloud infrastructure based on Google Cloud Platform, which includes Google Cloud Functions serverless technology, by June 2018. "As a small company, if we just stayed with classical servers hosted in the cloud, we were doing the same things the same way, hoping for a different outcome, and that's not realistic," said John Heard, CTO at Smart Parking. "What Google is solving are the big questions around how you change your focus from writing lots of code to writing small pieces of code that focus on the value of a piece of information, and that's what Cloud Functions are all about," he added.
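To make that "small pieces of code" idea concrete, below is a minimal sketch of what one such function could look like. It uses the Java interface from Google's Functions Framework (note that Cloud Functions targeted Node.js when this article was written; Java support arrived later), and the parking-bay scenario, the bay query parameter and the lookUpOccupancy helper are hypothetical illustrations rather than Smart Parking's actual code.

```java
import com.google.cloud.functions.HttpFunction;
import com.google.cloud.functions.HttpRequest;
import com.google.cloud.functions.HttpResponse;

// Hypothetical function: report whether a single parking bay is occupied.
// Each function handles one narrow piece of information, which is the point
// of the "small pieces of code" model described above.
public class BayStatusFunction implements HttpFunction {

  @Override
  public void service(HttpRequest request, HttpResponse response) throws Exception {
    String bayId = request.getFirstQueryParameter("bay").orElse("unknown");
    boolean occupied = lookUpOccupancy(bayId);

    response.setContentType("application/json");
    response.getWriter().write(
        String.format("{\"bay\":\"%s\",\"occupied\":%b}", bayId, occupied));
  }

  // Placeholder; a real deployment would query a datastore or event stream.
  private boolean lookUpOccupancy(String bayId) {
    return false;
  }
}
```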


3 reasons why hiring older tech pros is a smart decision

For software engineers in particular, experience counts a lot, Matloff said. "The more experienced engineers are far better able to look down the road, and see the consequences of a candidate code design," he added. "Thus they produce code that is faster, less bug-prone, and more extendible." And in data science, recent graduates may know a number of techniques, but often lack the ability to use them effectively in the real world, Matloff said. "Practical intuition is crucial for effective predictive modeling," he added. Older tech workers also typically have more experience in terms of management and business strategy, Mitzner said. Not only can they offer those skills to the company, they can also act as mentors to younger professionals and pass on their knowledge, she added. "Most people who have been successful in their career would say that they had a great mentor," Mitzner said. "If you have a business that's all 20s to 30s, you could be really missing out on that." Many older employees also appreciate the same flexibility that younger workers do, as they balance work and home life with aging parents and children reaching adulthood, said Sarah Gibson, a consultant with expertise on changing generations in the workforce.


Computes DOS: Decentralized Operating System

“Computes is more like a decentralized operating system than a mesh computer,” replied one of our most active developer partners yesterday. He went on to explain that Computes has all of the components of a traditional computer, but designed for decentralized computing. The more I think about it, the more profound his analogy is. We typically describe Computes as a decentralized peer-to-peer mesh supercomputing platform optimized for running AI algorithms near real-time data and IoT data streams. Every machine running the Computes nanocore agent can connect, communicate, and compute together as if they were physical cores within a single software-defined supercomputer. In light of yesterday’s discussion, I believe that we may be selling ourselves short on Computes’ overall capabilities. While Computes is uniquely well positioned for enterprise edge and high-performance computing, most of our beta developers seem to be building next-generation decentralized apps (Dapps) on top of our platform.


GDPR impact on Whois data raising concern


Cyber criminals typically register a few hundred, even thousands, of domains for their activities, and even if fake details are used, registrants have to use a real phone number and email address, which is enough for the security community to link associated domains. Using high-speed machine-to-machine technology and with full access to Whois data, Barlow said organisations such as IBM were able to block millions of spam messages or delay activity coming from domains associated with the individuals linked to spam messages. While the GDPR is designed to enhance the privacy of individuals, it is having the unintended effect of encouraging domain registrars not to submit registration details to the registration directory service (RDS), which means the information is incomplete and of less value to cyber crime fighters. Without access to Whois data, IBM X-Force analysts predict it might take more than 30 days to detect malicious domains by other methods, leaving organisations at the mercy of cyber criminals during that period.


Brush up on microservices architecture best practices


Isolating and debugging performance problems is inherently harder in microservice-based applications because of their more complex architecture. Therefore, productively managing microservices' performance calls for a full-fledged troubleshooting plan. In this follow-up article, Kurt Marko elaborated on what goes into successful performance analysis. Effective examples of the practice will incorporate data pertaining to metrics, logs and external events. To get the most out of tools like Loggly, Splunk or Sumo Logic, aggregate all of this information into one unified data pool. You might also consider a tool that uses the open source ELK Stack. Elasticsearch has the potential to greatly assist troubleshooters in identifying and correlating events, especially in cases where log files don't display the pertinent details chronologically. The techniques and automation tools used for conventional monolithic applications aren't necessarily well-suited to isolating and solving microservices' performance problems.
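To make the aggregation step concrete, here is a small sketch that pushes one structured log event into an Elasticsearch index through its REST API (POST /<index>/_doc). The index name, field names and localhost endpoint are assumptions for the example, and the sketch uses the java.net.http client, so it needs Java 11 or later; a production setup would more likely route events through Logstash, Beats or a logging appender than through hand-rolled HTTP calls.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Instant;

public class LogShipper {

  // Assumed local Elasticsearch endpoint and index name; adjust for your stack.
  private static final String ES_ENDPOINT = "http://localhost:9200/microservice-logs/_doc";

  public static void main(String[] args) throws Exception {
    // Structured event: service name, a latency metric and a correlation id,
    // so queries can line up events across services even when individual
    // log files are not chronological.
    String event = String.format(
        "{\"timestamp\":\"%s\",\"service\":\"checkout\","
            + "\"latencyMs\":412,\"traceId\":\"abc-123\",\"level\":\"WARN\"}",
        Instant.now());

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(ES_ENDPOINT))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(event))
        .build();

    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println("Elasticsearch replied: " + response.statusCode());
  }
}
```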


More Attention Needs to be on Cyber Crime, Not Cyber Espionage

Cyber crime remains a global problem that continues to be innovative and all-encompassing. What’s more, cyber crime doesn’t focus solely on organizations but also on individuals. The statistics demonstrate the magnitude of the cyber crime onslaught. According to a 2017 report by one company, damages incurred by cyber crime are expected to reach $6 trillion by 2021. Conversely, cyber security investment is only expected to reach $1 trillion by 2021, according to the same source. Furthermore, data breaches continue to afflict individuals. During the first half of 2017, more than 2 billion records fell victim to cyber theft, whereas “only” 721 million records were lost during the last half of 2016, a 164 percent increase. According to another reputable source, the three major classifications of breaches impacting people were identity theft (69 percent), access to financial data (15 percent) and access to accounts (7 percent). With cyber crime communities existing all over the world, these groups and individuals offer professional-grade goods and services based on quality and reputation, which quickly weeds out inferior performers; innovation and dependability are instrumental to success.


Data integrity and confidentiality are 'pillars' of cybersecurity


There are two pillars of information security: data integrity and confidentiality. Let's take a simple example: your checking account. Integrity is about the number itself. When you go to an ATM, or online, or to a teller and check your balance, that number should be easily agreed upon by you and your bank. There should be a clear ledger showing who put money in, when and how much, and who took money out, when and how much. There shouldn't be any randomness; there shouldn't be people putting money in or taking money out without your knowledge or your permission. So, one pillar is ensuring the integrity of information: the code you're running, the executables of your applications, should be the same ones the developer wrote. Just like the numbers in your bank account, the code you're running should not be tampered with. Then, there's confidentiality. You and your bank should be the only ones who know the numbers in your bank account. Taking confidentiality away from your checking account causes the same problems as taking it away from your applications and infrastructure.
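A minimal sketch of the integrity half of that argument, assuming the developer publishes a SHA-256 digest of the release: the code recomputes the digest and checks that both sides agree on the same number, just as you and the bank agree on the balance. The file name and expected digest are placeholders, and java.util.HexFormat requires Java 17 or later.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

public class IntegrityCheck {

  public static void main(String[] args) throws Exception {
    // Placeholder artifact and digest; in practice the expected value comes
    // from the developer, e.g. a signed release manifest or checksum file.
    Path artifact = Path.of("app.jar");
    String expected =
        "0000000000000000000000000000000000000000000000000000000000000000";

    byte[] digest = MessageDigest.getInstance("SHA-256")
        .digest(Files.readAllBytes(artifact));
    String actual = HexFormat.of().formatHex(digest);

    // Like the bank ledger: both sides must agree on the same number.
    if (actual.equalsIgnoreCase(expected)) {
      System.out.println("Integrity check passed");
    } else {
      System.out.println("Integrity check FAILED, got " + actual);
    }
  }
}
```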


This new type of DDoS attack takes advantage of an old vulnerability

"Just like the much-discussed case of easily exploitable IoT devices, most UPnP device vendors prefer focusing on compliance with the protocol and easy delivery, rather than security," Avishay Zawoznik, security research team leader at Imperva, told ZDNet. "Many vendors reuse open UPnP server implementations for their devices, not bothering to modify them for a better security performance." Examples of problems with the protocol go all the way back to 2001, but the simplicity of using it means it is still widely deployed. However, Imperva researchers claim the discovery of how it can be used to make DDoS attacks more difficult to attack could mean widespread problems. "We have discovered a new DDoS attack technique, which uses known vulnerabilities, and has the potential to put any company with an online presence at risk of attack," said Zawoznik. Researchers first noticed something was new during a Simple Service Discovery Protocol (SSDP) attack in April. 


Java 9: The Module System and Reactive Streams

This article focuses on the Module System and Reactive Streams; you can find an in-depth description of JShell here, and of the Stack Walking API here. Naturally, Java 9 also introduced some other APIs, as well as improvements related to internal implementations of the JDK; you can follow this link for the entire list of Java 9 features. The Java Platform Module System (JPMS) – the result of Project Jigsaw – is the defining feature of Java 9. Simply put, it organizes packages and types in a way that is much easier to manage and maintain. In this section, we’ll first go over the driving forces behind JPMS, then walk you through the declaration of a module. Finally, we’ll build a simple application illustrating the module system. ... Module descriptors are the key to the module system. A descriptor is the compiled version of a module declaration – specified in a file named module-info.java at the root of the module’s directory hierarchy. A module declaration starts with the module keyword, followed by the name of the module. The declaration ends with a pair of curly braces wrapping around zero or more module directives.
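As a minimal illustration of that syntax, here is a hypothetical module descriptor; the module and package names are invented for the example, and java.logging is a real platform module used to show a dependency.

```java
// module-info.java, placed at the root of the module's source directory.
// Module and package names below are hypothetical.
module com.example.greetings {
    // Make one package visible to code in other modules.
    exports com.example.greetings.api;

    // Declare a dependency on another (here, a platform) module.
    requires java.logging;
}
```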



Quote for the day:


"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer

