The Financial Services Industry Is About To Feel The Multiplier Effect Of Emerging Technologies
Think about a world where retail banks could send cross-border payments
directly to a counterparty without navigating through intermediaries. Instead,
you could use a service dedicated to carrying out “Know Your Customer”
processes on behalf of the financial services community. The same principle
could apply to other transactions. Maybe a single, global fund transfer
network is in our future, where any kind of transaction could flow
autonomously while sharing only the minimum information necessary, maintaining
the privacy of all other personal financial data. ... The technology now
exists to massively increase computational power for a range of specific
problems, such as simulation and machine learning, by trying all possibilities
at once and linking events together. It's more akin to the physical phenomena of nature than to the on-or-off switches of ordinary computer calculations. As a
result, for instance, an investment bank may no longer have to choose between
accuracy and speed when deciding how to allocate collateral across multiple
trading desks. It could also give banks a more accurate way to determine how
much capital to keep on hand to meet regulations.
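The collateral-allocation trade-off mentioned above can be posed classically as a small linear program. Below is a toy Python sketch of that formulation (all desks, costs, and limits are hypothetical); the article's point is that quantum hardware may eventually handle far larger instances without trading accuracy for speed.

```python
# Toy collateral-allocation problem as a linear program with SciPy.
# All numbers are hypothetical and purely illustrative.
from scipy.optimize import linprog

costs = [0.02, 0.035, 0.025]      # funding cost per unit of collateral, per desk

# Each desk must receive at least its margin requirement...
requirements = [100.0, 250.0, 150.0]
bounds = [(req, None) for req in requirements]

# ...while the total posted cannot exceed the available inventory.
inventory = 600.0
result = linprog(c=costs, A_ub=[[1, 1, 1]], b_ub=[inventory], bounds=bounds)

print("allocation per desk:", result.x)   # optimal units per desk
print("total funding cost:", result.fun)
```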
The patching conundrum: When is good enough good enough?
Clearly some adjustment is needed on an unknown number of Windows machines.
And therein lies the big problem with the Windows ecosystem: even after all these years, it's still a vast, messy ecosystem of hardware vendors, multiple drivers, and software vendors that often build their solutions on undocumented behavior. Over the years, Microsoft has
clamped down on this “wild west” approach and mandated certain developer
requirements. It's one of the main reasons I strongly recommend that if you want to be in the Insider program or install feature releases on the very first day they come out, you use Windows Defender as your antivirus and not something from a third party. While Microsoft will often follow
up with a fix for a patch problem, typically — unlike this issue — it is not
released in the same fashion as the original update. Case in point: in November, Microsoft released an update that caused Kerberos authentication and ticket-renewal issues. Later that month, on Nov. 19, it released an out-of-band update for the issue. The fix was not released to the Windows Update release channel, nor to the Windows Server Update Services (WSUS) release channel; instead, IT administrators had to manually seek it out and download it or import it into their WSUS servers.
Building a SQL Database Audit System using Kafka, MongoDB and Maxwell's Daemon
Compliance and auditing: Auditors need the data in a meaningful and contextual
manner from their perspective. DB audit logs are suitable for DBA teams but
not for auditors. The ability to generate critical alerts in the event of a security breach is a basic requirement of any large-scale software system, and audit logs can serve this purpose. You must be able to answer a variety of questions: who accessed the data, what the earlier state of the data was, what was modified when it was updated, and whether internal users are abusing their privileges. It's important to note that since audit trails
help identify infiltrators, they promote deterrence among "insiders." People
who know their actions are scrutinized are less likely to access unauthorized
databases or tamper with specific data. All kinds of industries - from finance
and energy to foodservice and public works - need to analyze data access and
produce detailed reports regularly to various government agencies. Consider
the Health Insurance Portability and Accountability Act (HIPAA) regulations.
HIPAA requires that healthcare providers deliver audit trails about anyone and
everyone who touches any data in their records.
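As a rough sketch of the pipeline this section's title describes, the consumer below reads Maxwell's Daemon change events from Kafka and stores them in MongoDB as audit records. It assumes Maxwell is already streaming MySQL changes as JSON into its default "maxwell" topic, and that the kafka-python and pymongo packages are installed; broker and database addresses are placeholders.

```python
# Minimal audit-trail consumer: Kafka (Maxwell CDC events) -> MongoDB.
import json

from kafka import KafkaConsumer
from pymongo import MongoClient

consumer = KafkaConsumer(
    "maxwell",                              # Maxwell's default topic; adjust as needed
    bootstrap_servers=["localhost:9092"],   # assumed broker address
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

audit_events = MongoClient("mongodb://localhost:27017")["audit"]["events"]

for message in consumer:
    event = message.value
    # Each Maxwell event carries the database, table, operation type
    # ("insert", "update", "delete"), the row data after the change, and,
    # for updates, the prior values of modified columns under "old".
    audit_events.insert_one({
        "database": event.get("database"),
        "table": event.get("table"),
        "operation": event.get("type"),
        "timestamp": event.get("ts"),
        "data": event.get("data"),   # state after the change
        "old": event.get("old"),     # earlier state of modified columns (updates only)
    })
```

The stored "old" field is what lets auditors answer "what was the earlier state of the data" directly from the log.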
How Skillate leverages deep learning to make hiring intelligent
Skillate can work either as a standalone ATS that takes care of the end-to-end recruitment needs of your organization, or as an intelligent system that integrates with your existing ATS to make your recruitment easy, fast, and transparent. It does this by banking on cutting-edge technology and the power of AI to integrate with existing platforms, such as traditional ATSs like Workday and SuccessFactors, and solve some real pain points of the industry. However, for AI to work in
a complex industry like recruitment, we need to consider the human element
involved. Take for instance the words Skillate and Skillate.com — both these
words refer to the same company but will be treated as different words by a
machine. Moreover, new company and institute names appear every day, making it almost impossible to keep the software's vocabulary up to date. To illustrate further, consider the following two statements: 'Currently working as a Data Scientist at Amazon' and 'Worked on a project for the client Amazon.' In the first statement, "Amazon" will be tagged as a company, since the statement is about working in the organization. But in the second, "Amazon" should be treated as a normal word and not as a company. Hence the same word can have different meanings based on its usage.
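To see how entity labels depend on context, here is a minimal sketch using an off-the-shelf spaCy model. It is purely illustrative, not Skillate's actual system; a general-purpose model may well tag both mentions as an organization, which is exactly why a recruitment-specific model has to be trained to make the finer distinction described above.

```python
# Context-dependent named-entity tagging with spaCy.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

sentences = [
    "Currently working as a Data Scientist at Amazon.",
    "Worked on a project for the client Amazon.",
]

for text in sentences:
    doc = nlp(text)
    # The label assigned to "Amazon" comes from the surrounding words,
    # not from a fixed vocabulary lookup.
    print(text, "->", [(ent.text, ent.label_) for ent in doc.ents])
```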
How to Build Cyber Resilience in a Dangerous Atmosphere
The first step to achieving cyber resilience is a fundamental paradigm shift: expect to be breached, and expect it to happen sooner rather than later. You are not "too small to be of interest"; what you do is not "irrelevant to an attacker"; it doesn't matter that there is a "bigger fish in the pond to go after." Your business is interconnected with all the others; it will happen to you. Embrace the shift. Step away from a
one-size-fits-all cybersecurity approach. Ask yourself: What parts of the
business and which processes are generating substantial value? Which must
continue working, even when suffering an attack, to stay in business? Make
plans to provide adequate protection — but also for how to stay operational
if the digital assets in your critical processes become unavailable. Know
your most important assets, and share this information among stakeholders.
If your security admin discovers a vulnerability on a server with IP address
172.32.100.100 but doesn't know the value of that asset within your business
processes, how can IT security properly communicate the threat? Would a
department head fully understand the implications of a remote code execution
(RCE) attack on that system?
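One way to close that gap is to enrich raw findings with business context from an asset register, so an alert speaks in terms of processes rather than IP addresses. The sketch below is hypothetical; the asset name, process, and criticality values are illustrative only.

```python
# Enriching a vulnerability finding with business context (illustrative).
ASSET_REGISTER = {
    "172.32.100.100": {
        "name": "erp-db-01",
        "business_process": "order-to-cash",
        "criticality": "high",   # agreed with the process owner, not IT alone
    },
}

def enrich_finding(ip: str, vulnerability: str) -> str:
    """Turn a raw finding into a message a department head can act on."""
    asset = ASSET_REGISTER.get(ip)
    if asset is None:
        return f"{vulnerability} on unregistered host {ip} (owner unknown)"
    return (
        f"{vulnerability} on {asset['name']} ({ip}), which supports the "
        f"{asset['business_process']} process (criticality: {asset['criticality']})"
    )

print(enrich_finding("172.32.100.100", "Remote code execution (RCE)"))
```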
A New Product Aims To Disrupt Free Credit Scores With Blockchain Technology
The foundation of Zoracles Protocol that differentiates the project from
other decentralized finance projects is its use of cutting-edge privacy
technologies centered around zero-knowledge proofs. Those familiar with these privacy-preserving techniques were most likely introduced to them by the team at Electric Coin Company, which is responsible for the zero-knowledge proofs developed for the privacy cryptocurrency Zcash.
Zoracles will build zk-SNARKs that are activated when pulling consumer credit scores while hiding their values as they are brought onto the blockchain. This is accomplished with a verification proof derived from the ZoKrates toolbox. Keeping the data confidential is critical to giving users the confidence to make their data available on-chain. It can be compared to using HTTPS (SSL) to transmit credit card data, which allowed eCommerce to flourish. A very interesting long-term goal of Zora.cc is to
eventually use credit score verification to prove identity. The implications
are enormous for the usefulness of their protocol if it can become the
market leader in decentralized identity. The team is focused on building the
underlying API infrastructure as well as a front-end user experience. If executed successfully, the offering would be very similar to Twilio's: the "Platform as a Service" could pair well with Zoracles' "Snarks as a Service." One should watch this project closely.
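For intuition only, the toy Python sketch below shows the commit-without-reveal idea that underlies such designs. It is a plain hash commitment, not a zk-SNARK; a real Zoracles circuit would be written with a toolbox like ZoKrates and verified on-chain, and could prove a statement such as "score >= 700" without ever opening the commitment.

```python
# Hash commitment: fix a private credit score without revealing it.
# This illustrates the hiding/binding intuition only; it is NOT a zk-SNARK.
import hashlib
import secrets

def commit(score: int, nonce: bytes) -> str:
    """Digest that binds the score; publishing it reveals nothing about the value."""
    return hashlib.sha256(nonce + score.to_bytes(2, "big")).hexdigest()

# Prover side: the consumer's score stays private.
score = 720                      # hypothetical value
nonce = secrets.token_bytes(32)  # random blinding factor
commitment = commit(score, nonce)

# Opening the commitment later proves what was committed; a zk-SNARK goes
# further, proving properties of the hidden value with no opening at all.
assert commit(score, nonce) == commitment
```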
Refactoring is a Development Technique, Not a Project
One of the more puzzling misconceptions that I hear pertains to the topic of
refactoring. I consult on a lot of legacy rescue efforts that will need to
involve refactoring, and people in and around those efforts tend to think of
“refactor” as “massive cleanup effort.” I suspect this is one of those
conflations that happens subconsciously. If you actually asked some of these
folks whether “refactor” and “massive cleanup effort” were synonyms, they
would say no, but they never conceive of the terms in any other way during
their day-to-day activities. Let's be clear. Here is the actual definition of refactoring, per Wikipedia: Code refactoring is the process of
restructuring existing computer code – changing the factoring – without
changing its external behavior. Significantly, this definition mentions
nothing about the scope of the effort. Refactoring is changing the code
without changing the application’s behavior. This means the following would
be examples of refactoring, provided they changed nothing about the way the
system interacted with external forces: Renaming variables in a single
method; Adding whitespace to a class for readability; Eliminating
dead code; Deleting code that has been commented out; and Breaking
a large method apart into a few smaller ones.
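A minimal before/after sketch (function and field names are made up) shows how small such refactorings can be while leaving external behavior untouched:

```python
# Before: terse names, one oversized function.
def proc(d):
    t = 0
    for x in d:
        t += x["q"] * x["p"]
    return t * 1.2

# After: descriptive names and an extracted helper; callers see identical results.
TAX_RATE = 1.2

def line_total(item):
    return item["q"] * item["p"]

def order_total(items):
    return sum(line_total(item) for item in items) * TAX_RATE

# Behavior is unchanged, which is what makes this a refactoring.
assert proc([{"q": 2, "p": 5.0}]) == order_total([{"q": 2, "p": 5.0}])
```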
Automation nation: 9 robotics predictions for 2021
"Autonomous robots took on more expansive roles in stores and warehouses
during the pandemic," says Rowland, "which is expected to gain momentum in
2021. Data-collecting robots shared real-time inventory updates and accurate
product location data with mobile shopping apps, online order pickers and
curbside pickup services along with in-store shoppers and employees." That's
especially key in large retail environments, with hundreds of thousands of
items, where the ability to pinpoint products is a major productivity
booster. Walmart recently cut its contract with robotic shelf scanning
company Bossa Nova, but Rowland believes the future is bright for the
technology category. Heretofore, automation solutions have largely been
task-specific. That could be a thing of the past, according to Rowland.
"Autonomous robots can easily handle different duties, often referred to as
'payloads,' which are programmed to address varying requirements, including
but not limited to, inventory management, hazard detection, security checks,
surface disinfection, etc. In the future, retailers will have increased
options for mixing/matching automated workflows to meet specific operational
needs." Remember running out of toilet paper? So do retailers and
manufacturers, and it was a major wake-up call.
Data for development: Revisiting the non-personal data governance framework
The framework needs to be reimagined from multiple perspectives. From the ground up, people — individuals and communities — must control their data, and it should not be considered merely a resource to fuel “innovation.” More specifically, data sharing of any sort needs to be anchored in individual data protection and privacy. The purpose for data sharing must be clear from the outset, and data should only be collected to answer clear, pre-defined questions. Further, individuals must be able to consent dynamically to the collection/use of their data, and to grant and withdraw consent as needed. At the moment, the role of the individual is limited to consenting to the anonymisation of their personal data, which is seen as a sufficient condition for subsequent data sharing without consent. Collectives have a significant role to play in negotiating better rights in the data economy. Bottom-up instruments such as data cooperatives, unions, and trusts that allow individual users to pool their data rights must be actively encouraged. There is also a need to create provisions for collectives — employees, public transport users, social media networks — to sign on to these instruments to enable collective bargaining on data rights.
3 things you need to know as an experienced software engineer
When we are in a coding competition where the clock is ticking, all we care about is efficiency. We will use variable names such as a, b, c, or index names such as j, k, l. Paying less attention to naming can save us a lot of time, and we will probably throw the code away right after the upload passes all the test sets. This is called "throw-away code": it is short and, as the name suggests, it won't be kept for long. In a real-life software engineering project, however, our code will likely be reused and modified, and the person doing so may be someone other than ourselves, or ourselves after six months of working on a different module. ...
Readability is so important that sometimes we even sacrifice efficiency for
it. We will probably choose less readable but extremely efficient code when working on projects that must be optimized down to a few CPU cycles and limited memory, such as a control system running on a microprocessor. However, in many real-life scenarios we care much less about that millisecond difference on a modern computer, and writing more readable code will cause much less trouble for our teammates.
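A small, contrived illustration of the contrast: both functions below compute the same result, but only one explains itself to the next reader.

```python
# Competition-style "throw-away" code: fast to type, hard to revisit.
def f(a):
    r = []
    for j in range(len(a)):
        if a[j] % 2 == 0:
            r.append(a[j] * a[j])
    return r

# Project-style code: the name and structure state the intent.
def squares_of_even_numbers(numbers):
    return [n * n for n in numbers if n % 2 == 0]

assert f([1, 2, 3, 4]) == squares_of_even_numbers([1, 2, 3, 4])
```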
Quote for the day:
"Leadership does not always wear the harness of compromise." -- Woodrow Wilson