Daily Tech Digest - May 20, 2020

How IT and Security Leaders Are Addressing the Current Social & Economic Landscape


Despite the security and overall organizational preparedness concerns, IT and security leaders share some notes of encouragement. The majority (68%) of IT leaders agree that their technology infrastructure was prepared to adequately support employees working from home. On an even brighter note, 81% of security leaders believe that their existing security infrastructure can adequately address current work-from-home demands, and 67% feel that their security infrastructure is fully prepared to handle the associated range of risks. As more and more individuals get their jobs done from home, 71% of IT leaders say that the current situation has created a more positive view of remote workplace policies and will likely affect how they plan for office space, tech staffing and overall staffing in the future. To address the new work environment created by COVID-19, 44% of IT leaders will need to acquire new technology solutions and services.



Hackers Hit Food Supply Company

DarkOwl said its analysis shows the attackers have managed to steal some 2,600 files from Sherwood. The stolen data includes cash-flow analysis, distributor data, business insurance content, and vendor information. Included in the dataset are scanned images of driver's licenses of people in Sherwood's distribution network. The threat actors posted screenshots of a chat they had with Coveware, a ransomware mitigation firm that Sherwood had hired to help deal with the crisis. The conversation shows that Sherwood has been dealing with the attack since at least May 3, according to DarkOwl's research. The screenshots also suggest that Sherwood at one point was willing to pay $4.25 million and later $7.5 million to get its data back. In an emailed statement, a Sherwood spokeswoman said the company does not comment on active criminal investigations. ... According to DarkOwl, on Monday the attackers updated Happy Blog with news of their plan to next auction off personal data belonging to Madonna.


5 Ways to Detect Application Security Vulnerabilities Sooner to Reduce Costs and Risk

Human error is always a security concern, especially when it comes to credentials. Just consider how many times you’ve heard of developers committing code only to later realize they’d accidentally included a password. These errors can lead to high-cost consequences for organizations. There are many tools that scan for secrets and credentials that can be accidentally committed to a source code repository. One example is Microsoft Credential Scanner (CredScan). Perform this scan in the PR/CI build to catch exposed credentials as soon as they are committed, so they can be rotated before they become a problem. Once an application is deployed, you can continue to scan for vulnerabilities through the following automated continuous delivery pipeline capabilities. Unlike SAST, which looks for potential security vulnerabilities by examining an application from the inside—at the source code—Dynamic Application Security Testing (DAST) looks at the application while it is running to identify any potential vulnerabilities that a hacker could exploit.
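The core idea of a secret scanner can be sketched with a few regular expressions. The patterns below are illustrative only — real tools like CredScan ship far more sophisticated rule sets and entropy checks — but they show why running such a check in the PR/CI build is cheap and effective.

```python
import re

# Illustrative patterns only; a real scanner has many more rules.
SECRET_PATTERNS = [
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9]{16,}['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def scan_text(text):
    """Return (line_number, line) pairs that look like committed secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            findings.append((lineno, line.strip()))
    return findings

sample = 'db_host = "localhost"\npassword = "hunter2"\n'
print(scan_text(sample))  # flags line 2
```

A CI job would run a scan like this over the diff of each pull request and fail the build on any finding, so the credential is rotated before it ever reaches the main branch.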


For me, it is that asynchronous programming is such a paradigm shift in a system architecture that it should be analyzed very differently from a “synchronous” system. We analyzed response times but never thought how many concurrent requests there would be at any point because, in a synchronous system, the calling system is itself limited in how many concurrent calls it can generate, because of threads getting blocked for every request. This is not true for asynchronous systems, and hence a different mental model is required to understand causes and outcomes. Any large software system (especially in the current environment of dependent microservices) is essentially a data flow pipeline and any attempt to scale which does not expand the most bottlenecked part of the pipeline is useless in increasing data flow. We thought of pushing a huge amount of data through our pipeline by making Armor alone asynchronous and failed to distinguish between a matter of Speed (doing this faster) from a matter of Volume (doing a lot of it at the same time).
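The Volume-versus-Speed distinction can be made concrete with Little's Law (average in-flight requests = arrival rate × latency): a synchronous caller is capped by its thread pool, while an asynchronous caller's concurrency balloons with downstream latency. The numbers below are hypothetical.

```python
def concurrent_requests(arrival_rate_per_s, latency_s, thread_cap=None):
    """Little's Law: average in-flight requests = rate * latency.
    A synchronous caller is additionally capped by its thread pool,
    because each blocked thread can carry only one request."""
    in_flight = arrival_rate_per_s * latency_s
    if thread_cap is not None:
        in_flight = min(in_flight, thread_cap)
    return in_flight

# Hypothetical: 200 req/s hitting a downstream whose latency rises to 2 s.
sync_load = concurrent_requests(200, 2.0, thread_cap=50)  # capped at 50
async_load = concurrent_requests(200, 2.0)                # balloons to 400
print(sync_load, async_load)
```

This is the different mental model the author describes: the synchronous system self-throttles at its thread cap, while the asynchronous one happily accumulates hundreds of in-flight requests and pushes the load onto the most bottlenecked stage of the pipeline.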


The downside of resilient leadership


Where does resilience come from? It’s a muscle that can be developed early on through a strong family life or a mentor relationship, or from positive experiences that help ready children and young adults for life’s tests in later years. But resilience is often also forged at young ages through adverse experiences that force children to rely on what psychologists call an “internal locus of control,” a concept developed in the 1950s by American psychologist Julian Rotter. When challenged, these young people decide that they are going to be in charge of their own fate and not let their circumstances define them. ... One of the messages these future leaders told themselves, or that was hammered into them by a parent, was “don’t be a victim.” Nobody would wish tough circumstances on another person, and yet it was in the moments of being tested that they discovered what they were made of. Adversity built a quiet confidence in them, because they went through tough times and knew they could do it again.


Why the cloud journey is hard

Conway’s Law states: “The structure of any system designed by an organisation is isomorphic to the structure of the organisation,” which means software or automated systems end up shaped like the organisational structure they’re designed in or designed for, according to Wikipedia. This could be why some organisations find it difficult to fully embrace cloud adoption: certain legacy organisational structures just don’t fit into a more demanding, agile-oriented cloud environment. Nico Coetzee, Enterprise Architect for Cloud Adoption and Modern IT Architecture at Ovations, elaborates: “Every company that embarks on its cloud journey can count on some deliverables not going as planned. There are many reasons for the failure of certain modernisation projects and cloud journeys, but it might come as a surprise to hear that the most common reason could be as simple as traditional structures.” If we go back to Melvin E. Conway’s research on ‘How Do Committees Invent?’ from 1967, there are some key insights.


Executive AI Fluency – Ending the Cycle of Failed AI Proof-of-Concept Projects

Executives cannot understand AI in a purely conceptual fashion. They need practical use-cases for the types of AI projects they are brainstorming – and it is even better (at least initially) to have examples from their industry or related industries. One example of a strong AI use-case in banking is fraud detection. Some banks and AI vendors report having lowered their rate of false-positive results for financial fraud using predictive analytics solutions. A wide range of use cases allows leadership to better detect where AI opportunities might lie within the company and decide which of the many possible projects deserve the most attention. Banking leaders should be able to expect a chatbot solution to provide their customers basic answers to common and simple questions. Bank leadership should not expect their chatbot to handle complex conversations, or draw upon rich context from previous email or phone conversations with the client. The technology is simply not at that level today. In this way, working with AI is more strategic than the “plug and play” nature of IT solutions.


US Treasury Warning: Beware of COVID-19 Financial Fraud

FinCEN notes that medical-related fraud scams, including fake cures, tests, vaccines and services, may require customers to pay via a pre-paid card instead of a credit card; require the use of a money services business or convertible virtual currency; or require that the buyer send funds via an electronic funds transfer to a high-risk jurisdiction. The agency notes that scams involving nondelivery of medical-related goods often occur through websites, robocalls or on the darknet. Scams involving price gouging include cases where individuals have been selling surplus items or newly acquired bulk shipments of goods - such as masks, disposable gloves, isopropyl alcohol, disinfectants, hand sanitizers, toilet paper and other paper products - at inflated prices, FinCEN explains. "Payment methods vary by scheme and can include the use of pre-paid cards, money services businesses, credit card transactions, wire transactions, or electronic fund transfers," it notes. ... "FinCEN is correct in its assertion that there will be a huge increase in all types of cybercrimes, especially related to medical scams and related cyberattacks," says former FBI agent Jason G. Weiss.


How the UK pensions industry is paving the way for open data sharing ecosystems

While some questions remain over how the regulatory standards from the pensions dashboard and Open Banking (a separate regulation focused on building transparency and open sharing into the banking industry) can be applied to a wider Open Finance initiative, the pension dashboard’s architecture — federated digital identity, UMA, and interoperability through secure Open APIs — provides a viable model for Open Finance. Crucially, these technologies conform to open standards, meaning the architecture that underpins them can be updated and synced with any new technology, preventing the formation of any legacy systems and allowing for consistent innovation. When adopted across the financial services ecosystem, they would create a variety of secure, trustworthy, and user-friendly tools that would empower users to engage more meaningfully with their finances. Picture it: financial advisors and brokers could deliver important financial advice more completely, immediately, and visibly through the kind of seamless user experiences that are currently the preserve of digital native sectors.


NCSC discloses multiple vulnerabilities in contact-tracing app

The encryption vulnerability in the beta app has arisen because the app does not encrypt proximity contact event data, and the data is not independently encrypted before it is sent to the central servers. This, said Levy, means that when data is transferred to the back-end, it is only protected by the transport layer security (TLS) protocol, so that if Cloudflare was compromised in some way, cyber criminals could access that data. He pointed out that this was something else that was sacrificed at first because of the need for speed. Finally, Levy noted some ambiguities and errors in statements made about the beta app. Among these was a statement that “the infrastructure provider and the healthcare service can be assumed to be the same entity”. This suggests that the NCSC trusts the network bridging the gap between user devices and the central NHS servers in the same way as it trusts the whole of the NHS, which is clearly not the case.



Quote for the day:


"You must learn to rule. It's something none of your ancestors learned." -- Frank Herbert


Daily Tech Digest - May 19, 2020

CEOs, CISOs fear becoming the next big breach target


The global survey of 200 CEO and CISO respondents was conducted in industries including healthcare, finance, and retail, and uncovered prominent cybersecurity stressors and areas of disconnect for business and security leaders, Forcepoint said. These include the fact that fewer than half of CEO respondents have an ongoing cybersecurity strategy. The research also identified disparities between geographic regions on data protection, as well as a digital transformation dichotomy between increased risk and increased technology capability. The disparity is compounded by a belief that senior leadership is cyber-aware and data-literate (89%) and focused on cybersecurity as a top organizational priority (93%), according to the report. Meanwhile, cybersecurity strategies are seen by 85% of executives as a major driver for digital transformation, yet 66% recognize the increased organizational exposure to cyber threats because of digitization, the Forcepoint report said. Only 46% of leaders regularly review their cybersecurity strategies, according to the report.



Interview With Node.js Technical Steering Committee Chair

The major challenge was that Node.js already had a well established module system and that ESM was different in many important ways. Things like asynchronous loading versus synchronous loading leads to the potential for a lot of subtle interoperability problems. Unfortunately when the ESM spec was being put together the Node.js project was not very active in that process (or other standards either!). The result was some areas of conflict between the existing module system and long standing community expectations/usage and the spec as a reflection of what was a good fit for browsers. The modules team has done a good job of working through a large number of edge cases and finding approaches (and getting agreement for them which can be hard) that allow for reasonable interoperability while working to maintain compatibility with the spec. ... In terms of larger features, the Node.js project does not have a formal roadmap so “What’s” next is often “What’s ready” when the next release is being cut. We do however, have longer term plans and initiatives.


IT Spending Forecast: Unfortunately, It's Going to Hurt

Businesses' response to the pandemic will continue to spur spending in technology areas that support working from home, such as public cloud services, now expected to grow by 19% in 2020. Cloud-based telephony and messaging and cloud-based conferencing are expected to grow by 8.9% and 24.3%, respectively. But longer-term transformational projects are likely to be put on hold as CEOs look to preserve cash, John-David Lovelock, Gartner chief forecaster and distinguished research VP, told InformationWeek. If a project costs a lot to finish and won't return cash quickly, it will probably be put on hold or cancelled. The Gartner forecast shows many segments experiencing a decline in 2020, with devices and data center systems hit hardest, down 9.7% and 15.5%, respectively. Enterprise software will decline by 6.9% and IT services will fall by 7.7%. That's pretty bleak. But the current economic situation is not like a typical recession, in which activity slows gradually and the effects are felt over time.


Microsoft and Sony to create smart camera solutions for AI-enabled image sensor


Sony and Microsoft have joined together to create artificial intelligence-powered (AI) smart camera solutions to make it easier for enterprise customers to perform video analytics, the companies announced. The companies will embed Microsoft Azure AI capabilities onto Sony's AI-enabled image sensor IMX500. Announced last week, the IMX500 is the world's first image sensor to contain a pixel chip and logic chip. The logic chip, called Sony's digital signal processor, is dedicated to AI signal processing, along with memory for the AI model. "Video analytics and smart cameras can drive better business insights and outcomes across a wide range of scenarios for businesses," said Takeshi Numoto, corporate vice president and commercial chief marketing officer at Microsoft.  "Through this partnership, we're combining Microsoft's expertise in providing trusted, enterprise-grade AI and analytics solutions with Sony's established leadership in the imaging sensors market to help uncover new opportunities for our mutual customers and partners." According to Sony, the app will allow independent software vendors (ISVs) and smart camera original equipment manufacturers (OEMs) to develop AI models, thereby enabling them to create their own customer and industry-specific video analytics and computer vision solutions that use the IMX500 image sensor.


Verizon DBIR: Breaches doubled, but plenty of silver linings


Despite some alarming figures, the 2019 Verizon DBIR offered some good news as well. For example, detection time saw improvements over last year, as well as malware blocking. "Trojans have dropped in our data. In 2015 it was a top action, and now it's gone all the way to the bottom largely because the tools that are blocking it from getting into organizations have been successful," Widup said. Perhaps most importantly, 81% of breaches were "discovered in days or less," according to the report, compared to 2018 where 56% of breaches took months or longer to discover. "You see all these people who are saying 'prevention, prevention, prevention,' but if you can't detect it, it's really hard to prevent," Widup said. "We do see some improvements but it's not happening as fast as we'd like it to as researchers. It's also challenging because the threat is shifting, so being able to detect it is also always shifting and it makes it hard for people who make these tools to make it automated and reliable."


Wearable sensor integrates machine learning innovation

In collaboration with researchers at the University of Calgary Human Performance Lab (UCHPL), Protxx recently demonstrated the ability to integrate both diagnostic and therapeutic functions into Protxx wearable devices in order to enhance the management of neurodegenerative medical conditions. The newly announced collaborations and investments will drive product prototyping of the integrated device with Triple Ring Technologies (TRT), Newark CA, and pilot testing at UCHPL. TRT’s Venture Studio and Edmonton-based Brass Dome Ventures are both supporting the collaboration as new Protxx investors. Investment terms were not disclosed. In addition to the new investments, Protxx and the UCHPL-based Integrative Sensorimotor Neuroscience Laboratory directed by Dr. Ryan Peters have been awarded a Mitacs Accelerate grant to support graduate student researchers participating in the project in 2020-2021. 


From thinking about the next normal to making it work: What to stop, start, and accelerate

Office life is well defined. The conference room is in use, or it isn’t. The boss sits here; the tech people have a burrow down the hall. And there are also useful informal actions. Networks can form spontaneously (albeit these can also comprise closed circuits, keeping people out), and there is on-the-spot accountability when supervisors can keep an eye from across the room. It’s worth trying to build similar informal interactions. TED Conferences, the conference organizer and webcaster, has established virtual spaces so that while people are separate, they aren’t alone. A software company, Zapier, sets up random video pairings so that people who can’t bump into each other in the hallway might nonetheless get to know each other. There is some evidence that data-based, at-a-distance personnel assessments bear a closer relation to employees’ contributions than do traditional ones, which tend to favor visibility. Transitioning toward such systems could contribute to building a more diverse, more capable, and happier workforce. Remote working, for example, means no commuting, which can make work more accessible for people with disabilities; the flexibility associated with the practice can be particularly helpful for single parents and caregivers.


Digital transformation: Why this is a smart time to speed up


Every organizational strategy must be re-thought in the current environment. Consider how an accelerated timetable will enable a strategy that must be extremely flexible and adaptive to an unclear future. Strategies must build on an infinitely adaptable platform: Think playdough, not concrete. Meetings become much more efficient when their time is cut in half. The same applies to plans. You likely have a transformation path already mapped out to introduce much-needed change. What happens if you shorten the timeline by half and push to achieve the same goals? Force yourself to eliminate the “nice-to-haves” to get it done. Sure, there are risks in moving faster. Make those apparent to stakeholders so they can be active risk mitigators. You might be surprised at what risks they will accept. ... Make it clear that deployments never assume perfection. Do your best to reduce risk, then set up a clear path to report issues rapidly – with your team ready to respond quickly. Agile balances the need for speed with the expectation of adjustment. Every organization grows stronger by learning from both hits and misses.


Smartphones, laptops, IoT devices vulnerable to new BIAS Bluetooth attack


"At the time of writing, we were able to test [Bluetooth] chips from Cypress, Qualcomm, Apple, Intel, Samsung and CSR. All devices that we tested were vulnerable to the BIAS attack," researchers said. "Because this attack affects basically all devices that 'speak Bluetooth,' we performed a responsible disclosure with the Bluetooth Special Interest Group (Bluetooth SIG) - the standards organisation that oversees the development of Bluetooth standards - in December 2019 to ensure that workarounds could be put in place," the team added. In a press release published today, the Bluetooth SIG said they have updated the Bluetooth Core Specification to prevent BIAS attackers from downgrading the Bluetooth Classic protocol from a "secure" authentication method to a "legacy" authentication mode, where the BIAS attack is successful. Vendors of Bluetooth devices are expected to roll out firmware updates in the coming months to fix the issue. The status and availability of these updates is currently unclear, even for the research team. The academic team behind the BIAS attack includes Daniele Antonioli from the Swiss Federal Institute of Technology in Lausanne (EPFL), Kasper Rasmussen from the University of Oxford in the UK, and Nils Ole Tippenhauer from the CISPA Helmholtz Center for Information Security in Germany.


Fabulous Enables Building Declarative Cross-Platform UIs

Fabulous makes a new approach to app programming possible by adopting a React-like MVU architecture, says Syme. This approach aims to simplify code and make it more testable and less repetitive. Fabulous adopts the Model-View-Update (MVU) paradigm to replace the ubiquitous Model-View-ViewModel (MVVM) and provides a functional way to describe UIs and the interaction between their components. Fabulous is not the first framework to adopt MVU, which was made popular by React and Redux, Flutter, Elm, and other projects. The basic idea behind MVU is managing a core, immutable model which represents the UI state. Each time a UI event takes place, a new model is calculated from the current one and then used to create the view anew. In Syme's view, the main tenets of MVU are that it supports functional programming and the creation of dynamic UIs through simple declarative models expressed in the same high-level language as the rest of your application.
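The MVU loop described above can be sketched in a few lines. The counter example below is a generic, language-neutral illustration of the pattern, not Fabulous's actual F# API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # the model is immutable
class Model:
    count: int = 0

def update(msg, model):
    """Each event produces a *new* model rather than mutating the old one."""
    if msg == "increment":
        return Model(model.count + 1)
    if msg == "decrement":
        return Model(model.count - 1)
    return model  # unknown messages leave the model unchanged

def view(model):
    """The view is recomputed from the model after every update."""
    return f"Counter: {model.count}"

model = Model()
for msg in ["increment", "increment", "decrement"]:
    model = update(msg, model)
print(view(model))  # Counter: 1
```

Because `update` is a pure function from (message, model) to model, UI logic in this style can be unit-tested without rendering anything, which is the testability benefit Syme mentions.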



Quote for the day:


"Every great leader can take you back to a defining moment when they decided to lead" -- John Paul Warren


Daily Tech Digest - May 18, 2020

Creating a safe path to digital with open standards


Despite the process automation industries being vastly different in their outputs, there are many commonalities in the desire for efficiency, interoperability and the ability to integrate best-in-class technologies. Recognizing the need for cross-industry collaboration, a group of companies representing a variety of verticals got together three years ago to discuss the possibility of developing an open standard for process automation. Each company in attendance was driven by the need for more flexible solutions. Shortly after, the Open Process Automation Forum (OPAF) was born under the guidance of The Open Group. Since then, the Forum has worked to lay the foundations for developing a standard to ensure the security, interoperability and scalability of new control systems. A year ago, over 90 member organizations were involved with the creation of OPAF’s O-PAS Standard, Version 1.0, which is now a full standard of The Open Group. While industry standards for process automation are already available in the marketplace and fit-for-purpose, the O-PAS Standard focuses on interoperability, using existing industry standards and adopting and adapting them to create a “standard of standards.”


Should AI assist in surgical decision-making?

Fully automated surgeries performed by robots is still a ways off. In the meantime, developers are trying to beat those grim numbers by harnessing the best of human decision making and coupling it with truly exceptional technology tools designed to assist surgeons. Artificial intelligence and machine learning are often touted as solutions for call centers and to provide intelligent insights to companies that have reams of data that needs to be processed, but leveraging AI/ML to better medical outcomes could be one of the transformative technologies of our time. "Surgical decision-making is dominated by hypothetical-deductive reasoning, individual judgment, and heuristics," write the authors of a recent JAMA Surgery paper called Artificial Intelligence and Surgical Decision-making. "These factors can lead to bias, error, and preventable harm. Traditional predictive analytics and clinical decision-support systems are intended to augment surgical decision-making, but their clinical utility is compromised by time-consuming manual data management and suboptimal accuracy."


Home office technology will need to evolve in the new work normal


Technology will have to know our contexts. The home technology experience will have to adapt to our various modes and have the capacity to manage the compute requirements. "There is a very large innovation cycle coming to really make the world at home adaptable to all of these contexts as we look forward," said Roese. Edge computing will come to the home. As remote work evolves to include augmented and virtual reality as well as video conferencing and data-intensive applications, IT infrastructure at home will change. Roese said that edge computing devices may be deployed in homes by enterprises to beef up home infrastructure. "Early, when we were talking about edge, it was all about smart factories and smart cities and smart hospitals, but there's another class of edge compute that's really interesting in this new world," said Roese. "And that is to augment the compute capacity of the devices that attach to that edge." 5G, AR, VR and applications that need horsepower would use these edge compute devices. Edge computing in the home could provide more real-time experiences and additional compute capacity. These edge devices at home would also offer scale on demand.



Grafana: The Open Observability Platform

Grafana is open-source visualization and analytics software that works with lots of different databases and data sources. It connects to data regardless of where it resides — in the cloud, on-premises, or somewhere else — and helps organizations build the perfect picture to help them understand their data. Perhaps Grafana's most distinctive feature is that it is data-source neutral, meaning it doesn't matter where your data is stored, Grafana can unify it. These sources can include time-series, logging, SQL and document databases, cloud data sources, enterprise plugins, and more options from community-contributed plugins. No matter the source, the data stays where it is, and you can visualize and analyze it at will. This makes Grafana a versatile tool, open to use for a wide range of applications. There is one caveat to the statement above: for Grafana to be useful, your data should be time-series data, i.e., data taken at particular points in time. This describes a lot of data sources, but not all of them.
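"Time-series data" here simply means values indexed by timestamps. The snippet below shows that shape with synthetic CPU readings; the metric and values are illustrative, not tied to any particular Grafana data source.

```python
from datetime import datetime, timedelta, timezone

# A time series is just (timestamp, value) pairs sampled over time.
start = datetime(2020, 5, 18, tzinfo=timezone.utc)
cpu_series = [
    (start + timedelta(minutes=i), 40.0 + i * 1.5)  # synthetic % CPU values
    for i in range(4)
]

for ts, value in cpu_series:
    print(ts.isoformat(), value)
```

Data already in this shape (metrics, logs with timestamps, sensor readings) maps naturally onto Grafana panels; data without a time dimension, such as a plain customer table, does not.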


Why open source is heading for a new breakthrough


While anticipating an increase in uptake, Miller doesn't expect Apple and Microsoft fans to begin jumping ship en masse – indeed, he acknowledges the platform will likely retain its more geeky audience. But that's not to say that Fedora 32 Workstation doesn't have the technical chops to go toe-to-toe with mainstream operating systems, with Miller alluding to the huge advances that Linux as a desktop has made over the past 15 years as it has moved from the server to being the default choice for embedded everything everywhere. "It's so flexible and so able to fit into all of these different use cases," he says. "To me, it's clear that Linux is technically superior." And he adds: "It's not a money-saver option – this is something you should pick if you actually want this." Of course, the technical capability of Fedora is just one small piece of the package that forms the philosophy not just of Fedora Workstation but Linux and the open-source community in its entirety. "The real appeal of it is that this is an operating system that we own. It belongs to the people," he says. Looking to the future, Miller sees Linux as well-positioned to capitalize on the move to hybrid-type mobile devices, particularly as more OEMs throw their support behind the platform.


Will the solo open source developer survive the pandemic?

The last several weeks have been anything but. I’m not alone in finding it rough-going. For Julia Ferraioli, this isn’t because of “WFH.” It’s because of “WDP” [working during pandemic]: “I’ve been working remotely for 2.5 years. The past 2.5 months have left me more exhausted than ever before. This is your reminder that you’re not working remotely. You’re working remotely during a global health crisis.” This same pressure applies to open source maintainers, Fischer says: Today independent maintainers are, like many people, under more time and financial pressure than they were only a month or two ago. Most of these creators work on their projects on the side — not as their main day jobs — and personal and professional obligations come before open source work for many. Even before the coronavirus pandemic hit, this was true. In my interviews with a diverse range of open source maintainers, from curl’s Daniel Stenberg to SolveSpace’s Whitequark, most have contributed as a side project, not as their day job.


Why a pandemic-specific BCP matters


If you have not already done so, your organisation should develop BCPs specific to a pandemic or epidemic. Most existing BCPs address business recovery and resumption after events such as extreme weather, terrorism and power outages, but do not adequately address the repercussions of a pandemic. Unlike these other risks, disease outbreaks affect people more than they do datacentres and corporate facilities, and their duration is much longer. As already seen, disease outbreaks can flare up, subside, and then flare up again. Forrester recommended a three-step process to ensure that a pandemic response plan is thorough and effective. That includes identifying an executive sponsor and building a pandemic planning team, assessing critical operations, supplier and customer relationships, as well as the impact on the workforce. According to Forrester’s data and its own direct experience, organisations still fail to exercise their plans on a regular basis.


Time is Running Out on Silverlight

This situation came about because Silverlight is not a stand-alone platform; it requires a browser to host it. And in a way, it was doomed from the start. Silverlight was first released in 2007, the very same year that Apple announced that it wouldn't support browser plugins such as Adobe Flash on the iPhone. This essentially killed the consumer market for Silverlight, though it did live on for a while thanks to streaming services such as Netflix. Currently the only browsers that continue to run Silverlight are Internet Explorer 10 and 11. “There is no longer support for Chrome, Firefox, or any browser using the Mac operating system.” While Silverlight is essentially gone from the public web, it did gain some popularity for internal applications. For many companies this was seen as a way of quickly building line-of-business applications with better features and performance than HTML/JavaScript applications of the time. Such applications would normally be written in WinForms or WPF, but Silverlight made deployment and updating easier.


How Technologists Can Translate Cybersecurity Risks Into Business Acumen

The technology space can easily seem abstract, and therefore confusing and overwhelming. To alleviate the fear that stems from uncertainty, technologists can distill foundational principles into checkpoints that empower business people to ask the right questions in the right environment. A good place to start is by establishing the top metrics affecting an organization by answering questions such as, “Does the organization have subject matter experts leading security?” “Who is assigned to manage this specific piece of technology?” “How do we measure this space?” “What portion of the budget is invested in protecting this technology?” “How does this technology tie into our broader risk appetite statement?” You may well find that how you measure these risks is your greatest risk. Most organizations assess risk on a quarterly basis, in addition to an annual deep-dive. In general, the more time devoted to assessing and reassessing cybersecurity threats and technology, the better. One of the foundational principles of security and risk management is that the efficacy of controls degrades over time. Technology is analogous to topography in this regard; just as you would expect natural elements like water and wind to erode a stone wall over time, technology’s architecture will likewise deteriorate – only much more quickly.


Data protection and GDPR: what are my legal obligations as a business?

The GDPR requires that anyone holding or processing personal data take both ‘technical’ and ‘organisational’ measures to ensure that personal data is secure and that data subjects’ rights are maintained. Technical measures refer to firewalls, password protection, penetration testing and the like; anyone holding personal data on electronic systems should consult IT professionals to ensure that adequate security measures are in place to protect the data. Organisational measures refer to internal policies, staff training and so on. Ideally, businesses will have both internal data protection policies and a programme of staff training (often delivered online). ... Some countries have been deemed to have an adequate data protection framework (e.g. Switzerland, Canada) and data can be transferred to these territories (though note that any processors will still need to enter into a formal processing agreement, as described above). If you are transferring to a US company, it may be certified under the “Privacy Shield” framework, which allows transfers to those specific companies.
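To make one of those ‘technical’ measures concrete, here is a minimal sketch of salted password hashing with Python’s standard library, the kind of protection an IT professional might check for when reviewing how account credentials are stored. All names here are my own for illustration; nothing in the article prescribes this exact scheme.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted scrypt digest; store (salt, digest), never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the digest for the candidate password and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024)
    return hmac.compare_digest(candidate, digest)
```

The per-user random salt and the memory-hard scrypt parameters are what make bulk cracking of a leaked table expensive, which is the point of “password protection” as a technical measure.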



Quote for the day:


"Time is neutral and does not change things. With courage and initiative, leaders change things." -- Jesse Jackson


Daily Tech Digest - May 17, 2020

Self-supervised learning is the future of AI

Supervised deep learning has given us plenty of very useful applications, especially in fields such as computer vision and some areas of natural language processing. Deep learning is playing an increasingly important role in sensitive applications, such as cancer detection. It is also proving to be extremely useful in areas where the scale of the problem is beyond being addressed with human efforts, such as—with some caveats—reviewing the huge amount of content being posted on social media every day. “If you take deep learning from Facebook, Instagram, YouTube, etc., those companies crumble,” LeCun says. “They are completely built around it.” But as mentioned, supervised learning is only applicable where there’s enough quality data and the data can capture the entirety of possible scenarios. As soon as trained deep learning models face novel examples that differ from their training examples, they start to behave in unpredictable ways. In some cases, showing an object from a slightly different angle might be enough to confound a neural network into mistaking it for something else.


Banks Need to Learn What Big Tech Teaches

Tech advancements can be revolutionary. Consider the smartphone: be it iPhone or Android, a smartphone is essentially a group of services packaged together in a physical phone. Those services put an amazing amount of power and capability literally into the palm of your hand. Software updates occur frequently (talk about a rapid pace of change), yet users are supremely indifferent to, and often unaware of, which version of the operating system they are using; regardless, they welcome new features delivered as part of nonintrusive upgrades installed while they sleep or at whatever time they specify. Similarly, connected cars such as Teslas regularly receive over-the-air software updates that add new features and enhance functionality. No one asked for the addition of Tesla’s Sentry Mode (not even Tesla) when the car was designed. It was an afterthought (albeit a brilliant one), delivered as part of a continuous upgrade. Now drivers can monitor their Tesla wherever it’s parked and receive alerts whenever a security incident occurs.


Enabling Manufacturing using IOTA – A possible approach post Covid-19 paradigm


The Internet of Things is no longer a technological breakthrough; it is mainstream. Industrial applications have been faster to adopt IoT, which plays a significant role for businesses that require internal tracking, near-zero error rates with less manual intervention, machine-to-machine communication, and prognostic maintenance. RFID chips and other sensors are much cheaper and easier to manufacture than most sizeable consumer electronics. The future of IoT will continue along these lines, especially post COVID, with many manufacturing concerns gradually embracing automation at massive scale and shaping the smart industrial applications concept. However, combining blockchain with IoT also calls for distributed and secure exchange of the data captured through these sensors and devices. The intersection of blockchain technology and IoT has been explored since 2015 as a way to solve critical IoT challenges related to security and data privacy. The IOTA protocol has entered into a number of collaborations, and it technically differentiates itself from most cryptocurrencies through its underlying technology: instead of blockchain-enabled transactions, it uses a Directed Acyclic Graph (DAG) as the distributed ledger storing the transactional data of the IOTA network.
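To make the DAG-ledger idea concrete, here is a toy sketch of a tangle-style structure in which each new transaction approves up to two earlier, not-yet-approved transactions (“tips”). All names are invented for illustration, and this omits everything that makes IOTA real (cryptographic signatures, proof of work, consensus); it only shows why the result is a directed acyclic graph rather than a chain of blocks.

```python
import random

class TangleSketch:
    """Toy DAG ledger: each new transaction approves up to two prior tips,
    loosely mimicking IOTA's tangle. Illustration only, no crypto/consensus."""

    def __init__(self, seed=42):
        self.rng = random.Random(seed)
        self.approves = {"genesis": []}   # tx id -> list of approved tx ids
        self.tips = {"genesis"}           # txs not yet approved by anyone

    def add_transaction(self, tx_id):
        # New transactions may only reference already-existing ones,
        # so the graph is acyclic by construction.
        parents = self.rng.sample(sorted(self.tips), k=min(2, len(self.tips)))
        self.approves[tx_id] = parents
        self.tips -= set(parents)
        self.tips.add(tx_id)
        return parents
```

Because approving a transaction is what it costs to issue one, every participant contributes to validation; there are no miners and no blocks, which is the property the excerpt is pointing at.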


A Reassessment of Enterprise Architecture Implementation

The research question in this contribution is: what are the factor combinations for successful EA implementation, beyond the mere notion of maturity? As the basis of our analysis we will employ a description framework developed in the course of various practitioners’ workshops over the last eight years. Based on this description framework we will analyze six cases and discuss why certain companies have been rather successful in implementing EA while others did not leverage their EA investment. The analysis will show that EA success is not necessarily a matter of the maturity of a number of EA functions but of a complex set of factors that have to be observed when implementing EA. Also, there is no perfect set of EA factor combinations guaranteeing successful EA, because EA is always part of a complex socio-technical network. However, we will identify successful factor combinations as well as common patterns prohibiting EA success.


Supercomputers hacked across Europe to mine cryptocurrency

The malware samples were reviewed earlier today by Cado Security, a US-based cyber-security firm. The company said the attackers appear to have gained access to the supercomputer clusters via compromised SSH credentials. The credentials appear to have been stolen from university members given access to the supercomputers to run computing jobs. The hijacked SSH logins belonged to universities in Canada, China, and Poland. Chris Doman, Co-Founder of Cado Security, told ZDNet today that while there is no official evidence to confirm that all the intrusions have been carried out by the same group, evidence like similar malware file names and network indicators suggests this might be the same threat actor. According to Doman's analysis, once attackers gained access to a supercomputing node, they appear to have used an exploit for the CVE-2019-15666 vulnerability to gain root access and then deployed an application that mined the Monero (XMR) cryptocurrency. Making matters worse, many of the organizations that had supercomputers go down this week had announced in previous weeks that they were prioritizing research on the COVID-19 outbreak, which has now most likely been hampered as a result of the intrusion and subsequent downtime.


How AI & Blockchain Can Reshape Healthcare Industry?

Blockchain technology is one of the most important and disruptive technologies in the world and is being used to unlock unexplored innovations in the healthcare industry. Blockchain technology is expected to improve medical record management and the insurance claim process, accelerate clinical and biomedical research and advance biomedical and healthcare data ledgers. These expectations are based on the key aspects of blockchain technology, such as decentralized management, immutable audit trails, data provenance, robustness, and improved security and privacy. Although several possibilities have been discussed, the most notable innovation that can be achieved with blockchain technology is the recovery of data subjects’ rights. Medical data should be possessed, operated, and allowed to be utilized by data subjects rather than hospitals. This is a key concept of patient-centered interoperability that differs from conventional institution-driven interoperability. There are many challenges arising from patient-centered interoperability, such as data standards, security, and privacy, in addition to technology-related issues, such as scalability and speed, incentives, and governance.


Five Strategies for Putting AI at the Center of Digital Transformation


Specifically, quick wins are smaller projects that involve optimizing internal employee touch points. For example, companies might think about specific pain points that employees experience in their day-to-day work, and then brainstorm ways AI technologies could make some of these tasks faster or easier. Voice-based tools for scheduling or managing internal meetings or voice interfaces for search are some examples of applications for internal use. While these projects are unlikely to transform the business, they do serve the important purpose of exposing employees, some of whom may initially be skeptics, to the benefits of AI. These projects also provide companies with a low-risk opportunity to build skills in working with large volumes of data, which will be needed when tackling larger AI projects. The second part of the portfolio approach, long-term projects, is what will be most impactful and where it is important to find areas that support the existing business strategy.


For all its sophistication, AI isn't fit to make life-or-death decisions

Face-detection surveillance is one way technology can help to track the spread of Covid-19.
Reckoning is essentially calculation: the ability to manipulate data and recognise patterns. Judgment, on the other hand, refers to a form of “deliberative thought, grounded in ethical commitment and responsible action, appropriate to the situation in which it is deployed”. Judgment, Smith observes, is not simply a way of thinking about the world, but emerges from a particular relationship to the world that humans have and machines do not. Humans are both embodied and embedded in the world. We are able to recognise the world as real and as unified but also to break it down into distinct objects and phenomena. We can represent the world but also appreciate the distinction between representation and reality. And, most importantly, humans possess an ethical commitment to the real over the representation. What is morally important is not the image or mental representation I have of you, but the fact that you exist in the world. A system with judgment must, Smith insists, not simply be able to think but also to “care about what it is thinking about”. It must “give a damn”. Humans do. Machines don’t.


The Different Kind of Value That EA & EA Framework Return to the Enterprise

A reference architecture is a generic architecture adopted as a standard for the analysis and design of systems in the same class. To be validated as a reference, rather than merely declared one by its promoters, a generic architecture must be widely adopted, having been reused and proven in many developments. A reference architecture, beyond being a generic architecture, exhibits the benefits of standards: it facilitates wide acceptance and reuse, predictable and comparable designs, reproducibility and, as such, productivity, which saves time and costs. TOGAF, though, is not a reference architecture, because it proposes no architecture. It is called a standard because it is specified by a standards organization with wide industry participation. TOGAF is not even a standard enterprise architecture method: it is hard to comply, or prove compliance, with due to its size and organic organisation and, most importantly, it delivers not the enterprise architecture we are after but mostly good development practices.


Change-mapping: Plan and Action

In reality, that apparent sequence exists only because of the dependencies between each of those domains: we need to know something about Context in order to define Scopes, we need to know Scope-boundaries for any Plan, we need to be clear about the Plan and preparation before we start any Action, and we need the results of any Action, and all the setup and Scope and Context, before we can do the respective Review. There may well be quite a lot of back-and-forth between the domains as details get fleshed out and call for a rethink of what happened earlier, which would break up the sequence somewhat. And there can also be multiple instances of each domain: a Context may spin off several Scopes, a Scope may require multiple projects or Plans, and each Plan may have multiple Actions, each of which will require their own Review. In that sense, no, it’s not just a straightforward single-pass linear sequence: it can often be a lot more complex than that. Yet the overall flow does line up well with that pattern – which is why it’s simplest to show it that way.



Quote for the day:


"Mistakes are always forgivable, if one has the courage to admit them." -- Bruce Lee


Daily Tech Digest - May 16, 2020

Why fuzzing is your friend for DevSecOps

Those just starting out should try open source tools. The two most popular today are AFL and libFuzzer, both primarily targeted at developers who have source code access (more on what to do without developer participation later). These tools focus on applications that are compiled, such as apps written in C and C++. Some fuzzers, predominantly commercial products, offer the ability to analyze compiled code even without developer participation. For example, the Defense Advanced Research Projects Agency ran a Cyber Grand Challenge to see if fully autonomous cybersecurity (both offense and defense) was possible, without any developer involvement or source code. Tools derived from that competition can now analyze production applications written in Ada, Go, Rust and JOVIAL, as well as compiled binaries. One limitation today is that most tools focus on code that runs on (or can be compiled for) Linux. Unfortunately, good fuzzing tools are hard to find for non-Linux systems, such as Windows or embedded operating systems.
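To illustrate the core idea behind these tools, here is a deliberately simplified, coverage-blind fuzzer in Python. The target function and its planted bug are invented for the demo; real fuzzers such as AFL and libFuzzer add coverage feedback and input mutation on top of this brute-force loop, which is what lets them reach bugs far rarer than this one.

```python
import random

def parse_record(data: bytes):
    """Toy parser with a planted bug: divides by a checksum that can be zero."""
    if not data:
        return 0
    checksum = sum(data) % 256
    return 256 // checksum   # ZeroDivisionError when checksum == 0

def fuzz(target, runs=5000, max_len=8, seed=0):
    """Coverage-blind random fuzzer: throw random byte strings at `target`
    and collect every input that makes it raise an exception."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, type(exc).__name__))
    return crashes
```

Running `fuzz(parse_record)` turns up a handful of crashing inputs, each of which is a reproducer a developer can feed back into a debugger, which is exactly the workflow the article describes.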



How to use tags in Microsoft Teams

Microsoft seems to have thought of everything when it comes to its Teams collaboration app; unfortunately, that means there's a lot packed into a relatively simple interface. Some items are located in difficult-to-find places, including the tagging function team owners can use to create small groups of people inside teams for easier communication. Tags can be created for particular projects, sub-teams inside particular departments, or any other group that needs to communicate easily through a simple @mention in the Microsoft Teams chat window. There are a few tricks to knowing how to use tags in Microsoft Teams; once you have it down, though, it's easy. To start, you'll need to figure out whether you have the ability to create tags. For individuals or small-business Microsoft Teams leaders, this is something you can set inside the Teams app. If you're using Microsoft Teams in an enterprise, you'll need to contact a Teams admin to make this change in the Teams Admin Center, a cloud-based administrator console.


Fight microservices complexity with low-code development

Microservices independently communicate with one another over internet standards, which is what makes the architecture powerful. Because they speak TCP/IP and deliver data payloads in JSON, the components work together without dependencies. These small services each perform one task well. A company can have a set of services for customer information, another for product lookup, a third for orders and a fourth for delivery. But breaking things down along business functions means there's a lot of code to manage. When something goes wrong, application teams require specialized observability tools that trace the entire chain of events to debug. A microservices architecture requires logging and monitoring work that exists outside the idea of simple components. That creates an explosion of code just to make the app code work. When something goes wrong, figuring out which component contributed to the issue can be tricky without the right tools -- which, again, means more code. While each service has high uptime in this supported deployment, resilience and reliability at the code level start to crumble.


How Google and Microsoft are cleaning up crowded browsers

In any case, Google is again turning its attention to tabs. In Chrome OS 81, it has added graphical site previews to touch-friendly tabs that appear with a swipe down from the top. The experience evokes the way Internet Explorer handled them back in the Windows RT days. Like other Chrome OS touch accommodations, it functions only when a Chromebook is in "tablet mode," i.e., when no keyboard is attached. Following this come reports that the company will formalize the grouping of tabs for better organization in Chrome, which has been available on an experimental basis. Both moves come on the heels of Microsoft demonstrating vertical tabs coming to Edge, announced as part of the Microsoft 365. These may not be as useful for organization as Chrome's tab grouping (the utility of which can also be addressed with multiple windows and even multiple desktops) and won't do much for touch friendliness, but it's easy to see how a grouping function could be added in the future. Even at launch, vertical tabs will do a better job at distinguishing among tab titles as the number of open tabs in a window grows.


U.S. Secret Service: “Massive Fraud” Against State Unemployment Insurance Programs


A federal fraud investigator who spoke with KrebsOnSecurity on condition of anonymity said many states simply don’t have enough controls in place to detect patterns that might help better screen out fraudulent unemployment applications, such as looking for multiple applications involving the same Internet addresses and/or bank accounts. The investigator said in some states fraudsters need only to submit someone’s name, Social Security number and other basic information for their claims to be processed. The alert follows news reports by media outlets in Washington and Rhode Island about millions of dollars in fraudulent unemployment claims in those states. On Thursday, The Seattle Times reported that the activity had halted unemployment payments for two days after officials found more than $1.6 million in phony claims. “Between March and April, the number of fraudulent claims for unemployment benefits jumped 27-fold to 700,” the state Employment Security Department (ESD) told The Seattle Times.


Which Agile contract type fits your project and budget?

Rather than see a software project to fruition as one large batch of work spanning several months, Agile breaks the work into manageable, adaptable and valuable segments. Some organizations can't handle restructuring for Agile, or they lack the resources to develop all their software projects in house. Outsourcing seems like the way to adopt Agile and reap its benefits. "We're starting to see projects that are handed over to a vendor -- a whole development effort, and they want the vendor to do it on an Agile basis," said Chris Powers, vice president of services at ClearEdge Partners, a consulting firm based in Boston. Powers hosted a webinar called Agile Contracting Best Practices, covering challenges in choosing a third-party development partner, and common types of contracts. Just as organizations cannot simply flip a switch to become Agile, they can't expect to outsource Agile work without giving up their Waterfall methodology. Agile work can fall under fixed-fee and time and materials (T&M) agreements that hardly differ from Waterfall approaches.


The Future of Data Architecture


Along with the emergence of dashboards and information reporting, he said, there was a strong desire to have access to analytics on the phone, because executives needed to be able to see their numbers anytime, anywhere. Now responsive design makes it possible for the output format to be decoupled from the analytics programming calculation, and the receiver can choose their form factor independently of the creation of the analytics itself. “Phones and mobile analytics used to be super-hot. Now they’ve settled down, and now they’re just part of the fabric of everything that we’re doing.” “It was the peak of hilarity to me that when we first started talking about the Internet of Things, we were saying, ‘Okay, the Twitter-enabled refrigerator.’ You remember that?” Not surprisingly, refrigerators with a screen enabling tweets from the kitchen have not become commonplace. “Who thought that was really going to help?” Algmin said that we’ve reached a point where many organizations have a Chief Data Officer or CDO equivalent, because they recognize that they want more from their data.


Language and Platform for Cloud-Era Application Developers

For decades, programming languages have treated networks simply as I/O sources. Because of that, to expose simple APIs, developers have had to implement these services by writing an explicit loop that waits for network requests until a signal is obtained. Ballerina treats the network differently by making networking abstractions like client objects, services, resource functions, and listeners part of the language’s type system, so you can use the language-provided types to write network programs that just work. Using a service type and a listener object in Ballerina, developers can expose their APIs simply by writing API-led business logic within the resource function. Depending on the protocol defined in the listener object, these services can be exposed as HTTP/HTTPS, HTTP2, gRPC, and WebSockets. Ballerina services come with built-in concurrency: every request to a resource function is handled in a separate strand (Ballerina’s concurrency unit), which gives implicit concurrent behavior to a service.


5 Ways to Make the Most of Your Enterprise Architecture and Hybrid Cloud Strategy


As organizations have embraced DevOps and agile methodologies, IT teams are looking for ways to speed up the development process. They use a public cloud to set up and do application development, because it’s very simple and easy to use, so you can get started quickly. But once applications are ready to deploy in production, enterprises may move them back to the on-premises data center for data governance or cost reasons. The hybrid cloud model makes it possible for an organization to meet its needs for speed and flexibility in development, as well as its needs for stability, easy management, security, and low costs in production. If your DevOps team is using cloud resources to build an application for speed, simplicity and low cost, you can use PubSub+ Event Broker: Software brokers or PubSub+ Event Broker: Cloud, our SaaS, in any public or private cloud environment. And if you’re moving an application to an on-premises datacenter when going into production for security purposes, you can simply move the application without having to rewrite the event routing. It’s just like the lift-and-shift use case described above, but in reverse.


How to manipulate hierarchical information in flat relational database tables

A document management system would help to create, keep and disseminate knowledge so other people could learn how to deliver and execute Linux-based projects. However, since I had no budget, I could not purchase any document management software. So with free ASP, Notepad, IIS Express, SQL Server Express and GIMP, I created a document management website to hold documents. The first system I created was simple: the parent folders or categories and documents were shown on the home page, and clicking on a folder, category name or document opened it in the next page. This was horrible and slow. So I racked my brains for a couple of months on how to do it better. Finally, I came up with this algorithm, which was 1.10.8 based. I wrote the horrible, ultra-complicated ASP code in Notepad (no budget for a Visual Studio license) and built a functional document management website. All the other C.O.E.s started using my website too, as they liked it and all needed a document management system they had no budget to purchase.
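The dotted "1.10.8" numbering suggests a materialized-path scheme, one common way to keep a hierarchy in a flat relational table: each row stores its full ancestry as a dotted path, so the tree can be rebuilt with a single ordered scan instead of one query per folder. The sketch below (sample data and function names invented, since the article does not show its code) illustrates the idea, including the trap that paths must be sorted numerically rather than as strings.

```python
# Each row is flat, as in a relational table: (path, title).
# The dotted path ("1.10.8") encodes the row's full ancestry.
rows = [
    ("1", "Linux Projects"),
    ("1.10", "Delivery Guides"),
    ("1.10.8", "Install Checklist"),
    ("1.2", "Templates"),
    ("2", "Archive"),
]

def sort_key(path):
    """Sort '1.10' after '1.2' numerically, not lexicographically."""
    return tuple(int(part) for part in path.split("."))

def render_tree(rows):
    """Return an indented outline from the flat rows in one ordered pass."""
    lines = []
    for path, title in sorted(rows, key=lambda r: sort_key(r[0])):
        depth = path.count(".")           # nesting level = number of dots
        lines.append("  " * depth + title)
    return lines
```

In SQL the same trick works with a `path` column and an `ORDER BY` on its numeric segments, which is presumably why the page could render the whole folder tree without recursive round-trips.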



Quote for the day:


"We are what we repeatedly do. Excellence therefore is not an act, but a habit." -- Aristotle


Daily Tech Digest - May 15, 2020

The Past, Present, and Future of API Gateways


AJAX (Asynchronous JavaScript and XML) development techniques became ubiquitous during this time. By decoupling data interchange from presentation, AJAX created much richer user experiences for end users. This architecture also created much “chattier” clients, as these clients would constantly send and receive data from the web application. In addition, ecommerce during this era was starting to take off, and secure transmission of credit card information became a major concern for the first time. Netscape introduced Secure Sockets Layer (SSL) -- which later evolved to Transport Layer Security (TLS) -- to ensure secure connections between the client and server. These shifts in networking -- encrypted communications and many requests over longer lived connections -- drove an evolution of the edge from the standard hardware/software load balancer to more specialized application delivery controllers (ADCs). ADCs included a variety of functionality for so-called application acceleration, including SSL offload, caching, and compression. This increase in functionality meant an increase in configuration complexity.


Adapting Cloud Security and Data Management Under Quarantine

The current state of affairs is not something envisioned by many business continuity plans, says Wendy Pfeiffer, CIO of Nutanix. Most organizations are operating in a hybrid mode, she says, with infrastructure and services running in multiple clouds. This can include private clouds, SaaS apps, Amazon Web Services, Microsoft Azure, and Google Cloud Platform. Though this specific situation may not have been planned for, the cloud allows for unexpected needs to scale and pivot, Pfeiffer says. “Maybe we envisioned a region being inaccessible but not necessarily every region all at once.” Normally it can be easy to declare standards within IT, she says, and instrument an environment to operate in line with those standards to maintain control and security. Losing control of that environment under quarantines can be problematic. “If everyone suddenly pivots to work from home, then we no longer control the devices people use to access the network,” Pfeiffer says. Such disruption, she says, makes it difficult to control performance, security, and the user experience.


While 78 per cent of organisations said they are using more than 50 discrete cybersecurity products to address security issues, 37 per cent used more than 100 cybersecurity products. Organisations that discovered misconfigured cloud services experienced 10 or more data loss incidents in the last year, according to the report. IT professionals have concerns about cloud service providers: nearly 80 per cent are concerned that cloud service providers they do business with will become competitors in their core markets. "Seventy-five per cent of IT professionals view public cloud as more secure than their own data centres, yet 92 per cent of IT professionals do not trust their organization is well prepared to secure public cloud services," the findings showed. Nearly 80 per cent of IT professionals said that recent data breaches experienced by other businesses have increased their organization's focus on securing data moving forward.


Continuous Security Through Developer Empowerment

Before DevOps kicked in, app performance monitoring (APM) was owned by IT, who used synthetic measurements from many points around the world to assess and monitor how performant an application was. These solutions were powerful, but their developer experience was horrible. They were expensive, which limited tests developers could run. They excelled in explaining the state through aggregating tests, but offered little value to a developer trying to troubleshoot a performance problem. As a result, developers rarely used them. Then, New Relic came on the scene, introducing a different approach to APM. Their tools were free or cheap to start with, making it accessible to all dev teams. They used instrumentation to offer rich results in developer terms (call stacks, lines of code), making them better for fixing problems. This new approach revolutionized the APM industry, embedded performance monitoring into dev practices and made the web faster. The same needs to happen for application security.


Data security guide: Everything you need to know

The move to the cloud presents an additional threat vector that must be well understood in respect to data security. The 2019 SANS State of Cloud Security survey found that 19% of survey respondents reported an increase in unauthorized access by outsiders into cloud environments or cloud assets, up 7% since 2017. Ransomware and phishing also are on the rise and considered major threats. Companies must secure data so that it cannot leak out via malware or social engineering. Breaches can be costly events that result in multimillion-dollar class action lawsuits and victim settlement funds. If companies need a reason to invest in data security, they need only consider the value placed on personal data by the courts. Sherri Davidoff, author of Data Breaches: Crisis and Opportunity, listed five factors that increase the risk of a data breach: access; amount of time data is retained; the number of existing copies of the data; how easy it is to transfer the data from one location to another -- and to process it; and the perceived value of the data by criminals.


This new, unusual Trojan promises victims COVID-19 tax relief


The malware is unusual in that it is written for Node.js, a JavaScript runtime primarily used for server-side development. "However, the use of an uncommon platform may have helped evade detection by antivirus software," the team notes. The Java downloader, obfuscated with Allatori and embedded in the lure document, grabs the Node.js malware file -- either "qnodejs-win32-ia32.js" or "qnodejs-win32-x64.js" -- alongside a file called "wizard.js." A 32-bit or 64-bit version of Node.js is downloaded depending on the Windows system architecture of the target machine. Wizard.js' job is to facilitate communication between QNodeService and its command-and-control (C2) server and to maintain persistence by creating Run registry keys. After executing on an infected system, QNodeService is able to download, upload, and execute files; harvest credentials from the Google Chrome and Mozilla Firefox browsers; and perform file management. In addition, the Trojan can steal system information, including IP address and location, download additional malware payloads, and transfer stolen data to the C2.
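The architecture-dependent payload selection described above can be sketched neutrally in Python (the actual dropper is Java/JavaScript; only the two file names come from the report, and the mapping logic here is an illustrative assumption):

```python
def pick_payload_name(machine: str) -> str:
    """Illustrative only: map a Windows architecture string to the
    two QNodeService file names reported by researchers."""
    is_64bit = machine.lower() in ("amd64", "x86_64", "arm64")
    return "qnodejs-win32-x64.js" if is_64bit else "qnodejs-win32-ia32.js"

print(pick_payload_name("AMD64"))  # qnodejs-win32-x64.js
print(pick_payload_name("x86"))    # qnodejs-win32-ia32.js
```

Shipping per-architecture builds like this is a common dropper pattern, and the distinct file names give defenders two concrete indicators to hunt for.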


Quantum computing analytics: Put this on your IT roadmap


"There are three major areas where we see immediate corporate engagement with quantum computing," said Christopher Savoie, CEO and co-founder of Zapata Quantum Computing Software Company, a quantum computing solutions provider backed by Honeywell. "These areas are machine learning, optimization problems, and molecular simulation." Savoie said quantum computing can bring better results in machine learning than conventional computing because of its speed. This rapid processing enables a machine learning application to consume large amounts of multi-dimensional data and generate more sophisticated models of a particular problem or phenomenon under study. Quantum computing is also well suited to solving optimization problems. "The mathematics of optimization in supply and distribution chains is highly complex," Savoie said. "You can optimize five nodes of a supply chain with conventional computing, but what about 15 nodes with over 85 million different routes? Add to this the optimization of work processes and people, and you have a very complex problem that can be overwhelming for a conventional computing approach."
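Savoie's point about combinatorial growth is easy to verify. One common way to count routes, as full orderings of the nodes, grows factorially, so by 15 nodes the count is already far beyond 85 million (the exact figure quoted presumably reflects a different routing model, but the explosion is the same):

```python
from math import factorial

# Number of distinct orderings of n supply-chain nodes
for n in (5, 10, 15):
    print(n, factorial(n))
# 5  -> 120
# 10 -> 3,628,800
# 15 -> 1,307,674,368,000
```

Going from 5 nodes to 15 multiplies the search space by roughly ten billion, which is why classical exhaustive search breaks down so quickly.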


COBIT Tool Kit Enhancements

The value of this tool is that it provides a convenient means of quickly assessing and assigning relevant roles to practices across the 40 COBIT objectives. COBIT promotes a common language and common understanding among practitioners. Common terminology facilitates communication and mitigates opportunities for error. Using RACI charts and the new COBIT Tool Kit spreadsheet gives practitioners the guidance to extract the COBIT practices relevant to each job role. Another benefit of compiling all practices into a single RACI chart is that metrics reporting can be better assessed. A user can filter all practices by the accountability of a single role, then compare metrics reporting on those practices and determine whether sufficient coverage has been created. An assessment of that type is not as effective when RACIs are developed at the higher objective level. The new spreadsheet can be found in the complimentary COBIT 2019 Tool Kit, available on the COBIT page of the ISACA website.
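The kind of role-based filtering described above can be sketched in a few lines of Python. The practice identifiers and role names here are illustrative placeholders, not contents of the actual tool kit, which is an Excel spreadsheet:

```python
# Toy RACI chart: each row maps a COBIT practice to one role's assignment
# (R = Responsible, A = Accountable, C = Consulted, I = Informed).
raci = [
    {"practice": "APO12.01", "role": "Risk Manager", "assignment": "A"},
    {"practice": "APO12.02", "role": "Risk Manager", "assignment": "R"},
    {"practice": "DSS05.01", "role": "CISO", "assignment": "A"},
]

def accountable_practices(chart, role):
    """Return the practices for which the given role is Accountable."""
    return [row["practice"] for row in chart
            if row["role"] == role and row["assignment"] == "A"]

print(accountable_practices(raci, "Risk Manager"))  # ['APO12.01']
```

Filtering at the practice level like this, rather than at the objective level, is precisely what makes the coverage assessment described above possible.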


Build your own Q# simulator – Part 1: A simple reversible simulator


Simulators are a particularly versatile feature of the QDK. They allow you to perform a variety of tasks on a Q# program without changing it, including full state simulation, resource estimation, and trace simulation. The new IQuantumProcessor interface makes it very easy to write your own simulators and integrate them into your Q# projects. This blog post is the first in a series covering this interface. We start by implementing a reversible simulator as a first example, which we will extend in future blog posts. A reversible simulator can simulate quantum programs that consist only of classical operations: X, CNOT, CCNOT (Toffoli gate), or arbitrarily controlled X operations. Since a reversible simulator can represent the quantum state by assigning one Boolean value to each qubit, it can run even quantum programs that use thousands of qubits. This simulator is very useful for testing quantum operations that evaluate Boolean functions.
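The one-Boolean-per-qubit idea can be sketched in Python as a standalone toy. This is not the IQuantumProcessor implementation from the blog series (which is written against the QDK), just an illustration of why classical-only gate sets are so cheap to simulate:

```python
class ReversibleSimulator:
    """Toy reversible simulator: each qubit is a single Boolean,
    so memory grows linearly (not exponentially) with qubit count."""

    def __init__(self, num_qubits):
        self.state = [False] * num_qubits

    def x(self, q):
        # X gate: flip the target bit
        self.state[q] = not self.state[q]

    def cnot(self, control, target):
        # CNOT: flip target if the control is set
        if self.state[control]:
            self.x(target)

    def ccnot(self, c1, c2, target):
        # CCNOT (Toffoli): flip target if both controls are set
        if self.state[c1] and self.state[c2]:
            self.x(target)

sim = ReversibleSimulator(3)
sim.x(0)
sim.x(1)
sim.ccnot(0, 1, 2)   # computes AND of qubits 0 and 1 into qubit 2
print(sim.state)     # [True, True, True]
```

Because no superposition ever arises from X, CNOT, and CCNOT alone, a list of Booleans fully captures the state, which is why thousands of qubits pose no problem.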


Diligent Engine: A Modern Cross-Platform Low-Level Graphics Library

This article describes Diligent Engine, a light-weight cross-platform graphics API abstraction layer designed to solve these problems. Its main goal is to take advantage of next-generation APIs such as Direct3D12 and Vulkan while also supporting older platforms via Direct3D11, OpenGL, and OpenGLES. Diligent Engine exposes a common C/C++ front-end for all supported platforms and provides interoperability with the underlying native APIs. It also supports integration with Unity and is designed to be used as the graphics subsystem in a standalone game engine, a Unity native plugin, or any other 3D application. ... As mentioned earlier, Diligent Engine follows the next-gen APIs in how it configures the graphics/compute pipeline. One big Pipeline State Object (PSO) encompasses all required states (all shader stages, input layout description, depth-stencil, rasterizer, and blend state descriptions, etc.). This approach maps directly to Direct3D12/Vulkan, but it is also beneficial for older APIs because it eliminates pipeline misconfiguration errors.
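The PSO idea, baking every configurable state into one immutable object that is validated up front, can be sketched abstractly. The field names below are hypothetical and simplified; they are not Diligent Engine's actual C++ API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the state is fixed at creation time,
class PipelineState:     # so it cannot drift into a misconfigured mix
    vertex_shader: str
    pixel_shader: str
    input_layout: tuple
    depth_test: bool = True
    blend_mode: str = "opaque"

pso = PipelineState(
    vertex_shader="mesh.vsh",
    pixel_shader="mesh.psh",
    input_layout=(("position", "float3"), ("normal", "float3")),
)
print(pso.depth_test)  # True
```

Binding one complete, immutable object per draw, instead of dozens of individually mutable states, is what lets Direct3D12/Vulkan validate the pipeline once at creation and why the same model catches misconfiguration even on older APIs.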



Quote for the day:


"Different times need different types of leadership." -- Park Geun-hye