house price going up

Discussion – How does technology affect housing prices?

Considered the largest asset class in the world, real estate carries a notable valuation, estimated at $217 trillion.

Technology, beginning with the emergence of the Internet, has greatly impacted the whole industry, benefiting buyers, sellers, and agents. From the good old days of basic cold calls, print ads, emails, and open house visits, we have moved to a wide variety of social media sites, virtual tour apps, IoT devices, chatbots, cloud computing, advanced analytics, and blockchain. Thus, there is far more room for sales, purchases, and prospecting.

Beyond that, technology has made it easier to get data on quality via user ratings on review websites. People look not only for accessibility, pleasant climates, and booming economies, but also for the quality of nearby amenities.

A recent study in the Journal of Urban Economics that uses data from Washington, D.C. finds that restaurant quality, as measured by Yelp reviews, affects property values: a doubling of the number of highly rated restaurants (rating > 3.4) within a one-mile radius of a home is associated with an 11.5% increase in the home’s value.

Information accessibility through technology thus opens up a wider market landscape with more specific variables, which in turn affects housing prices.

As the aforementioned study has shown, housing prices increase in areas with the best amenities as the quality of those amenities becomes well known.

Tl; dr;

Technology plays a major role in making it easier for everyone to share and gather digital information when acquiring homes. It helps buyers get a more realistic view of a property, plus the quality of the amenities available in that area. The higher the housing demand in an area, the higher the value of the homes located there.

In this way, technology in the housing industry affects customer decisions and, at the same time, the market value of homes.

The Ever-changing Housing Industry with Technology

smart home

Source: iot.eetimes.com

Traditionally based on three assets, namely land, buildings, and money, the housing sector of the real estate industry now employs digital information to assess demographics, government policies, and interest rates in specific locations through apps and websites, enabling customers looking to buy or rent homes to closely monitor housing rates. This is especially useful for the younger generation interested in home ownership.

In 2018, CoreLogic, together with RTi Research of Norwalk, Conn., conducted an extensive consumer housing sentiment study, combining consumer and property insights. Their findings show that potential buyers in the younger-millennial demographic have the desire to buy: 40% are extremely or very interested in home ownership.

In fact, 64% say they regularly monitor home values in their local market. However, while 80% of younger millennials plan to move in the next four or five years, 73% cite affordability as a barrier to home ownership (far higher than any other age cohort).

“Our consumer research indicates younger millennials want to purchase homes but the majority of them consider affordability a key obstacle,” said Frank Martell, president and CEO of CoreLogic. “Less than half of younger millennials who are currently renting feel confident they will qualify for a mortgage, especially in such a competitive environment.”

The rental market

When it comes to home value, online platforms such as Airbnb claim that they bring more money to cities, both in rental fees and in the money renters spend during their stays. The company also notes that roughly three-quarters of its listings aren’t in traditional tourist neighbourhoods, which means that money is going to communities typically ignored by the hospitality industry. This is notable given that the company offers over 5 million properties in over 85,000 cities across the world, and its market valuation exceeds $30 billion.

However, there is no further evidence of how the company came to that conclusion.

Since then, a working paper has been published examining the effects of home sharing on house prices and rents, drawing on data from three sources:

  • consumer-facing information, from Airbnb, about the complete set of Airbnb properties in the U.S. (there are more than 1 million) and the hosts who offer them;
  • zip code–level information, from Zillow, about rental rates and housing prices in the U.S. real estate market; and
  • zip code–level data from the American Community Survey, an ongoing survey by the U.S. Census Bureau, including median household incomes, populations, employment rates, and education levels. The authors combined these different sources of information to study the impact of Airbnb on the housing market.

The results show that, in aggregate, the growth in home sharing through Airbnb contributes about one-fifth of the average annual increase in U.S. rents and about one-seventh of the average annual increase in U.S. housing prices. By contrast, annual zip-code demographic changes and general city trends contribute about three-quarters of total rent growth and about three-quarters of total housing price growth.

Summary

Technology’s impact on the housing industry has underlying economic effects.

Evidence from the aforementioned Airbnb study shows that the platform affects the housing market through the reallocation of housing stock. Looking at housing vacancies, Airbnb supply positively correlates with the share of homes that are vacant for seasonal or recreational use and negatively correlates with the share of homes on the market for long-term rentals.

By Tuan Nguyen

React logo

Technology review – React 16.11

React 16.11 is here, about a month after the release of React 16.10. Let’s see what it brings.

This version is not even listed on the releases page just yet; maybe I’m just early.

Tl; dr;

Not much is happening; there are only two updates from the look of it. You can safely upgrade your code base if you don’t use either of the following experimental APIs: unstable_createRoot and unstable_createSyncRoot.

Bug fix

A single bug fix is included in React 16.11.

Fix mouseenter handlers from firing twice inside nested React containers.

This could troll developers quite a bit if they encountered it, and they would end up writing hacks to get around it. I am grateful that it was fixed before I even got to experience the headache.
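
For illustration, here is a minimal sketch (with hypothetical element ids and component names) of the kind of nested-container setup this fix targets:

```tsx
import React from "react";
import ReactDOM from "react-dom";

// Hypothetical setup: a second React container mounted inside the DOM of an
// outer React container. Before 16.11, mouseenter handlers in arrangements
// like this could fire twice for a single pointer entry.
function Inner() {
  return (
    <div onMouseEnter={() => console.log("inner mouseenter")}>
      Nested container content
    </div>
  );
}

function Outer() {
  return (
    <div onMouseEnter={() => console.log("outer mouseenter")}>
      {/* The nested React root is mounted into this node below. */}
      <div id="inner-root" />
    </div>
  );
}

ReactDOM.render(<Outer />, document.getElementById("outer-root")!);
ReactDOM.render(<Inner />, document.getElementById("inner-root")!);
```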

Removal of experimental APIs from the stable build

This does not affect day-to-day development, since unstable_createRoot and unstable_createSyncRoot are just experimental APIs. However, separating them from the stable build helps reduce the gzipped size of React 16.11 by 0.1%.

react size

Source: React GitHub

You can still use these functions in the Experimental channel.
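
For reference, this is roughly what calling the removed API looked like in a stable 16.10 build; the same capability now lives only in the Experimental channel (a sketch only, with a hypothetical root element):

```tsx
import React from "react";
import ReactDOM from "react-dom";

function App() {
  return <h1>Hello</h1>;
}

// unstable_createRoot was the opt-in root API exposed by stable builds up to
// 16.10; from 16.11 it is only available via the Experimental channel.
// The cast is only there because the public type definitions may not declare it.
const root = (ReactDOM as any).unstable_createRoot(
  document.getElementById("root")!
);
root.render(<App />);
```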

Facebook decided to release a minor version (React 16.11) because of the removal of these features. Experimental or not, they are part of the library’s surface, and removing them deserves its own release.

Summary

Yet another small blog post, but we’re getting closer and closer to Concurrent Mode, as previewed at React Conf 2019.

React logo

Technology review – React 16.10

A minor upgrade of React has arrived with version 16.10. Let’s see what has changed compared to the previous release.

Tl; dr;

React 16.10 comes with a bunch of bug fixes and no new features.

Everything is backward compatible so you can upgrade your code base to React 16.10 with ease.

Bug fixes

React DOM

  • Fix edge case where a hook update wasn’t being memoized.
  • Fix heuristic for determining when to hydrate, so we don’t incorrectly hydrate during an update.
  • Clear additional fiber fields during unmount to save memory.
  • Fix bug with required text fields in Firefox.
  • Prefer Object.is instead of inline polyfill, when available.
  • Fix bug when mixing Suspense and error handling.

Most of these bug fixes are very specific; personally, I haven’t come across any of these issues, but it is nice to see them fixed. One thing to note is that there are a lot more issues currently open on the library’s issue board, so we will see more fixes shipped in future versions.
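
As an illustration of the Object.is change: React compares hook dependencies using Object.is semantics, and 16.10 prefers the built-in when it exists instead of always using an inline polyfill. A rough fallback sketch (not React’s actual source) looks like this:

```ts
// React compares hook dependencies (e.g. useEffect deps) with Object.is
// semantics; 16.10 prefers the built-in when available instead of always
// shipping an inline polyfill. A rough fallback sketch (not React's source):
function objectIs(a: any, b: any): boolean {
  if (typeof Object.is === "function") {
    return Object.is(a, b);
  }
  // The two cases where === disagrees with Object.is: +0/-0 and NaN/NaN.
  return (a === b && (a !== 0 || 1 / a === 1 / b)) || (a !== a && b !== b);
}

console.log(objectIs(NaN, NaN)); // true, while NaN === NaN is false
console.log(objectIs(+0, -0));   // false, while +0 === -0 is true
```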

Scheduler (Experimental)

  • Improve queue performance by switching its internal data structure to a min binary heap.
  • Use postMessage loop with short intervals instead of attempting to align to frame boundaries with requestAnimationFrame.

The Scheduler is what React uses to optimize the performance of re-rendering the DOM. The React team has spent a lot of time optimizing this internal machinery to give us the performance we need.
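
For a feel of the postMessage idea, here is a minimal sketch of my own (not React’s actual scheduler) that processes queued tasks in short time slices driven by a MessageChannel rather than requestAnimationFrame:

```ts
// A minimal sketch of a postMessage-driven work loop (not React's actual
// scheduler): queued tasks are processed in short time slices scheduled
// through a MessageChannel instead of being aligned to animation frames.
type Task = () => void;

const queue: Task[] = [];
const channel = new MessageChannel();
const SLICE_BUDGET_MS = 5; // do a little work, then yield back to the browser

channel.port1.onmessage = () => {
  const deadline = performance.now() + SLICE_BUDGET_MS;
  while (queue.length > 0 && performance.now() < deadline) {
    queue.shift()!();
  }
  // If work remains, ask for another slice with another message.
  if (queue.length > 0) {
    channel.port2.postMessage(null);
  }
};

function scheduleTask(task: Task): void {
  queue.push(task);
  if (queue.length === 1) {
    channel.port2.postMessage(null);
  }
}

scheduleTask(() => console.log("did one unit of work"));
```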

useSubscription

  • Avoid tearing issue when a mutation happens and the previous update is still in progress.

People have been using hooks for quite a while now, and they are useful in many circumstances. We would like to have a stable feature that we can depend on, and fixing bugs is getting us there.
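
For context, useSubscription ships in the separate use-subscription package. A minimal sketch of typical usage, with a hypothetical useWindowWidth hook subscribing to an external mutable source, might look like this:

```ts
import { useMemo } from "react";
import { useSubscription } from "use-subscription";

// Hypothetical hook: read the window width as an external, mutable source.
// The subscription object is memoized so the component does not resubscribe
// on every render.
function useWindowWidth(): number {
  const subscription = useMemo(
    () => ({
      getCurrentValue: () => window.innerWidth,
      subscribe: (callback: () => void) => {
        window.addEventListener("resize", callback);
        return () => window.removeEventListener("resize", callback);
      },
    }),
    []
  );
  return useSubscription(subscription);
}
```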

Summary

Well, this is a small blog post, but the impact of these bug fixes in React 16.10 is huge for the community. It shows that Facebook is still active in React development and that the library will not go away any time soon.

By Tuan Nguyen

32 bit vs 64 bit

Technology review – 32-bit vs 64-bit OS

An operating system, also known as “OS”, keeps everything together, managing all the hardware and software on your computer.

It communicates with your device’s hardware: your keyboard, mouse, storage devices, display, and Wi-Fi radio. The OS also provides a lot of software, such as common system services, libraries, and application programming interfaces (APIs). You can interact with the operating system directly through a user interface such as a command line or a graphical user interface (GUI).

When we talk about operating systems, 32-bit and 64-bit operating systems are the most popular choices in the tech market.

The term “bit”, short for “binary digit”, denotes the smallest piece of data in a computer. A computer only understands binary (0s and 1s), and every bit can hold just one binary value, either 0 or 1. A computer stores data in collections of such bits; 8 bits make up a byte, also called an octet.

In computer architecture, a 32-bit processor handles 32 bits of data per operation, while a 64-bit processor can process data and memory addresses 64 bits at a time.

Generally, the more data that can be processed at once, the faster the system can operate.
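
To make the difference concrete, here is a small illustrative sketch of the address-space arithmetic (my own example, not tied to any particular OS):

```ts
// Addressable memory with n-bit addresses is 2^n bytes. BigInt is used
// because 2^64 overflows a regular JavaScript number.
const addressSpace = (bits: number): bigint => 2n ** BigInt(bits);

const GIB = 2n ** 30n; // bytes in one gibibyte

console.log(addressSpace(32));       // 4294967296 bytes
console.log(addressSpace(32) / GIB); // 4n -> 4 GiB
console.log(addressSpace(64));       // 18446744073709551616 bytes (~16 exabytes)
// Practical OS and hardware limits are far lower than the full 64-bit space.
```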

Tl; dr;

Most computers made from the 1990s to the early 2000s were 32-bit machines running 32-bit operating systems, but as the need for faster and more efficient data handling grew, the 64-bit architecture emerged.

Today, most new processors are based on the 64-bit architecture and support 64-bit operating systems; they are also backward compatible with 32-bit operating systems.

Only a few computers still use the 32-bit architecture, commonly referred to as x86 (a name that harks back to the early 386 and 486 systems, the first 32-bit x86 processors).

A few of the 32-bit operating systems still available on the market today are:

  • Microsoft Windows – including Windows 95, 98, NT, 2000, XP, Vista, and Server editions;
  • Linux – including Red Hat, Mandrake, and Ubuntu;
  • Solaris – versions 1-10;
  • Mac OS – Classic (1984-2001) and OS X; and
  • FreeBSD – versions 1-8.

What are the differences between 32-bit and 64-bit operating systems?

how operating system works

Source: howstuffareworking

In the early 1990s, the majority of desktop and laptop computers ran a 32-bit OS on a 32-bit processor. Most were based on Intel’s successful IA-32 architecture, also known as x86 (the 386 and 486 line).

With ever-increasing requirements came new hardware and software architectures, which introduced the 64-bit OS.

The 64-bit architecture has been used in supercomputers since the 1970s and in RISC (reduced instruction set computing) based workstations and servers since the early 1990s.

A program running on a 32-bit OS can address at most 4 gigabytes of memory; recall that 2^32 is roughly 4.3 billion. A program running on a 64-bit OS can address 2^64 bytes, which is 4 billion times 4 billion bytes – a large number indeed.

When comparing the two, the 64-bit OS clearly comes out ahead.

Compared to a 32-bit OS, a 64-bit OS increases program performance, offers additional security protections, and allows up to 16 TB of virtual memory and 2^64 distinct addressable values. 64-bit processors come in dual-core, quad-core, six-core, and eight-core versions; the multiple cores increase your computer’s processing power, making it run faster and helping software programs operate more efficiently.

The only edge 32-bit has is compatibility with older devices developed in the late 1990s and early 2000s. Vendors no longer develop applications for 32-bit operating systems, and due to a lack of market demand, manufacturers rarely offer 32-bit driver versions for their hardware.

Summary

The main difference between 32-bit and 64-bit operating systems is the way they manage memory. A 32-bit OS is limited to a maximum of 4 GB of system memory, while a 64-bit OS can allocate up to 16 terabytes. This is vitally important for performance because data in memory is accessed thousands of times faster than data on a disk drive, and programs load into memory much faster.

By Tuan Nguyen

keyboard on fire

Discussion – Can software damage hardware?

As present-day cyber technology improves, the threat landscape widens as well.

Cyber criminals are constantly finding ingenious ways to tunnel into businesses, schools, government offices, computer users, and IT systems, disrupting services and daily operations.

Several years ago, hackers and malware shut down nuclear centrifuges in Iran and severely damaged an unnamed steel mill in Germany.

Given the advancements we have today, are such incidents really possible?

A 2000 headline in the notoriously fictional tabloid Weekly World News claimed that “hackers can now turn your home computer into a bomb and blow your family to smithereens, and do so remotely from thousands of miles away.” That, of course, is essentially impossible in reality.

newspaper featuring hackers

Source: WIRED

The answer is both yes and no: theoretically and hypothetically it can happen, but in practice such conditions are unrealistic on most machines today.

Depending on your definition of damage, it might be possible in some EXTREME cases:

  • Running your CPU at 100% usage until it exceeds its maximum temperature;
  • Turning your speakers’ volume to max and then playing your favourite playlists non-stop for a few weeks;
  • Writing to a USB drive a few hundred million times until it becomes unusable; or
  • Researching your HDD’s drive geometry and issuing the worst possible commands, making it run hot and shortening its lifespan.

Obviously, you would need a lot of time to damage your computer this way.

Tl; dr;

When it comes to the question of software harming hardware, the answer is a battle between YES and NO.

Yes, in the sense that all hardware is driven by software, so it is technically possible to destroy your hardware, or at least make it inoperable, by messing with software. Think about overclocking, where you change settings in the BIOS that give you access to low-level controls. Mess up, and voilà, your hardware is gone. There are also viruses that could take advantage of this access and disrupt or destroy your hardware, since the system can be modified.

No, in the sense that software is nothing but instructions directing a flow of electricity, so under normal conditions it cannot damage hardware by itself.

Lastly, software can damage other software, such as the firmware that makes the hardware work, but this doesn’t necessarily break the hardware itself.

Is it TRUE that software can damage your hardware? How would that even be possible?

hardware and software

Source: ITI SOLUTIONS

There are quite a few instances where you can say that software can “potentially” harm your hardware.

In theory, such incidents might include:

A BIOS flash. Some motherboards allow you to flash (modify) the BIOS via software from within the OS. This opens a backdoor for malware to flash the BIOS with something that could damage the processor.

Overclocking tools. Some motherboards provide overclocking tools that allow you to change CPU settings from within the OS. As in the first situation, if a virus takes those over and pushes your CPU’s settings to the extreme, then BOOM!

Stress tests and intensive applications. Pushing your CPU to its limits can spike temperatures, which can eventually damage it. Of course, the fans in your computer help cool the CPU down, and most CPUs are designed to shut off when they reach dangerously high temperatures (thermal shutdown).
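
For illustration only, here is a minimal Node.js sketch (my own, hypothetical) of the kind of busy loop a stress test runs on every core; as noted above, modern CPUs throttle or shut down long before a healthy machine is damaged:

```ts
// Minimal CPU stress sketch (illustration only): spawn one busy-loop worker
// per logical core. Modern CPUs throttle or shut down well before damage.
import { Worker, isMainThread } from "worker_threads";
import * as os from "os";

if (isMainThread) {
  // Each worker re-executes this same file and falls into the else branch.
  // (With TypeScript, point this at the compiled .js file instead.)
  for (let i = 0; i < os.cpus().length; i++) {
    new Worker(__filename);
  }
} else {
  let x = 0;
  for (;;) {
    x = Math.sin(x) + 1; // pointless math keeps one core pinned at ~100%
  }
}
```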

As to whether such an attack is possible even in theory, security vendor CrowdStrike says YES.

“We can actually set the machine on fire,” said Dmitri Alperovitch, CrowdStrike CTO, in a statement.

BUT…

The exploit requires completely replacing the Mac’s firmware, which controls every aspect of the hardware.

This means an attacker would need to offer a fake Mac firmware update and convince the user to install it.

Summary

Software can potentially damage hardware if it is malicious, but with legitimate software it is extremely unlikely to happen.

For the German steel mill incident, someone found a way to control the plant’s equipment, preventing people on site from shutting down a blast furnace.

As for Stuxnet, the attack on the nuclear centrifuges, it didn’t destroy the computers themselves; it exploited the computers attached to the centrifuges and destroyed the centrifuges.

It is unlikely that viruses will destroy your computer itself, but hackers might someday go after the objects connected to it.

By Tuan Nguyen

microsoft vs google

Discussion – Google Spreadsheet vs Microsoft Excel

Spreadsheets are invaluable tools for creating budgets, producing graphs and charts, and storing and sorting data. They vary in complexity and are used for many purposes: business data storage, accounting and calculation, budgeting and spending, assisting with data exports, sifting and cleaning data, and handling business administrative tasks.

The best-known examples of spreadsheet software are Google Sheets and Microsoft Excel. Both allow complex mathematical calculations and data analysis, have auto-save features, and are compatible with Android, iOS, Windows, and Mac OS X.

Although many businesses rely on Microsoft Excel as their go-to application for spreadsheets, Google Sheets gives companies an innovative alternative for managing budgets, client contacts, and more.

Tl; dr;

Google Sheets and Microsoft Excel are two of the most popular spreadsheet platforms used by many small business owners and freelance enthusiasts anytime, anywhere.

They have similar functionality, such as advanced conditional formatting, dependency tracking, and robust options for creating graphs and charts.

Let’s take the time to discuss their core differences.

Key Differences: Google Spreadsheet vs Microsoft Excel

excel spreadsheet

Source: Lifehack

For years, Microsoft Excel has been a constant partner for businesses, but as software migrates to the cloud, Google’s spreadsheet software has emerged as a worthy opponent.

For now, consider these questions.

What are the main differences between the two? In which key areas does each have the greater edge? Which is best to use?

Popularity.

Although more and more users are trying out Google Sheets as a spreadsheet tool, Microsoft Excel has an entrenched user base that’s comfortable with it. It still takes a considerable amount of time for users to switch over to a new app and become familiar with it.

Collaboration.

Google Sheets makes work easier and fits into almost any workflow. It also gives you the ability to share with just about anyone while limiting their access and control.

On the other hand, Microsoft Excel allows sharing and collaboration, but it is more limited than Google Sheets. You’re largely restricted to sharing files via email and don’t get the same level of real-time collaboration that Sheets provides.

If you are using Office 365, you do get access to similar edit tracking and similar options for seeing activity from other users. But for now, Google Sheets prevails.

Functions and formulas.

Excel has a far larger set of functions and formulas, while some formulas are still missing from Sheets.

Offline access.

Microsoft Excel works perfectly offline and can automatically sync your files via OneDrive as soon as you regain internet access.

Google Sheets does offer offline access, but you may still have difficulty accessing files you previously created while online. You are pushed to install an offline extension to work on files extensively offline, and extensions can misbehave at any time.

Handling larger budget files.

In terms of handling transaction records, tabs, and fancy calculations and graphs, Microsoft Excel can handle up to 17,179,869,184 cells.

Although Google Sheets might be fine and dandy for your budget, it can only handle up to 5,000,000 cells.

Personalization.

When using a lot of functions, Microsoft Excel has a handy Quick Access toolbar where you can pin buttons for a quick and easy workflow, whereas Google Sheets has no such feature.

Cloud and Syncing.

Google Sheets was built from the ground up to be a cloud-based alternative to Microsoft Excel, so everything is accessible from your Google account and you can see and access all your files from Google Drive.

Excel on Office 2019 or earlier requires a bit of setting up, so you need Office 365 to get the same level of instant synchronization between devices.

Cost.

Microsoft Excel requires a one-time purchase of Microsoft Office or a subscription to Office 365, while Google Sheets is completely free to use: no annual fee, no monthly fee, and no per-user fee. Plus, if you have a Google account, you can already access it directly.

Summary

Though Google Sheets and Microsoft Excel each have their pros and cons, the two keep catching up with each other in technology, accessibility, and compatibility. The gap between them is getting smaller and smaller, slowly making the two platforms quite similar.

By Tuan Nguyen

compact discs

Technology review – 3D Optical Data Storage

Optical data storage, as Britannica defines it, is an “electronic storage medium that uses low-power laser beams to record and retrieve digital (binary) data.”

Optical storage technology uses lasers to write to, and read from, small discs that contain a light-sensitive layer upon which data can be stored.

optical data storage

Source: 123SEMINARSONLY

Optical storage systems usually consist of a drive unit and a storage medium in a rotating disc form. The discs, in general, are pre-formatted using grooves and lands (tracks), enabling the positioning of an optical pick-up and recording head to access the information on the disc.

Compared to other storage formats, optical data storage discs are small, portable, and do not easily wear out with continual use, making them particularly useful for storing data.

The first method for storing data on a hard medium using light was invented by James T. Russell in the late 1960s, after he realized that the wear and tear on vinyl records caused by continuous contact between the stylus and the record could be avoided by using light to read the music without physically touching the disc. Since then, the technology has seen many developments.

Today’s conventional optical storage uses a two-dimensional medium, i.e. CD-ROM, DVD, Blu-ray, etc. While these devices have steadily improved in storage capacity, they are still considered limited, given that data could in principle be written in many layers throughout the disc.

Now, researchers are trying to develop a new generation of storage media that could potentially provide petabyte-level mass storage on DVD-sized discs: 3D optical data storage.

Tl; dr;

3D optical data storage is an experimental storage technology predicted to offer exponentially more storage capacity than today’s data storage technologies.

This storage system may use a disc that looks nearly the same as a transparent DVD, with 100 layers of information stored on a single disc, each at a different depth in the medium and each consisting of a DVD-like spiral track.

The thickness of this upcoming disc is predicted to be about 5 millimeters, so expect discs thicker than today’s ordinary CDs and DVDs.

What is 3D optical data storage?


A cross-section of a 3D optical disc (yellow) along a data track (orange marks).

Source: TheBendster (talk) (Uploads)

As Wikipedia puts it, “3D optical data storage is any form of optical data storage in which information can be recorded or read with three-dimensional resolution (as opposed to the two-dimensional resolution afforded, for example, by CD).”

The origins of this storage system date back to the 1950s, when Yehuda Hirshberg developed photochromic spiropyrans and suggested their use in data storage.

In the 1970s, Valeri Barachevskii demonstrated that this photochromism could be produced by two-photon excitation and eventually in the late 1980s, Peter T. Rentzepis showed that this could lead to three-dimensional data storage.

3D optical data storage has the potential to provide petabyte-level (1,024 TB) mass storage on DVD-sized discs, using a technology that allows more than 100 layers to be written on a disc that looks like a traditional DVD, creating vastly more capacity.

Estimates suggest that 3D optical data storage discs could store 5 terabytes of data or more.

Types of 3D storage

There are two (2) major types of 3D storage:

  • The simple storage of data throughout the volume of the disc; and
  • Holographic storage.

Localized-bit storage. An extension of standard disc storage, localized-bit storage allows data to be stored not just on the surface of the disc but throughout its volume. The laser reads the data through the medium, rather than across the lands and pits of a traditional CD-ROM or DVD.

Holographic data storage. Though the above method is viable, holographic data storage has much greater storage and retrieval potential and better opportunities for implementation.

A hologram is a three-dimensional image created as light beams (e.g., from a laser) interfere with each other. Usually, a laser beam is split into two paths, a data beam and a reference beam, which are directed into a storage medium such as a crystal.

Advantages and disadvantages of 3D optical data storage

Optical storage differs from data storage techniques that rely on other technologies, such as magnetism or semiconductors.

The advantages of 3D optical data storage include:

  • Optical media can last a long time, depending on the kind of media you choose, provided it is properly cared for.
  • It is great for archiving: once data is written, the media cannot be reused, so the data is permanently preserved with no possibility of being overwritten.
  • It is widely usable across platforms (PCs or any other system).
  • Optical media can pinpoint a particular piece of data stored on it, independent of the other data on the volume or the order in which that data was written to the volume.

The downsides can be:

  • Due to the system’s write-once-read-many (WORM) characteristic, you cannot reuse the media once it has been written.
  • A server using software compression to write compressed data to optical media consumes considerable processing resources, increasing the time needed to write and restore that data.

Summary

Although several companies are actively developing the technology and claim that it may become available soon, no commercial product based on 3D optical data storage has yet arrived, due to design issues that still need to be addressed.

By Tuan Nguyen

nano robot

Technology review – Femto computing

Nanotechnology is now well established, well applied, and well funded. In the near future, there will come a time when we encounter femtotechnology.

As Wikipedia states, femtotechnology is a “hypothetical term used in reference to structuring of matter on the scale of a femtometer, which is 10^−15m. This is a smaller scale in comparison to nanotechnology and picotechnology which refer to 10^−9m and 10^−12m respectively.”

Tl; dr;

Femtotechnology is still in the theoretical zone, with no realistic application in the present.

Imagine a future where the soles of your shoes are made of femto-enhanced rubber that is more resilient to the elements. Femto-sized probes and chemicals may one day course through your blood to protect your immune system against deadly viruses and diseases, or enable your smartphone to become thin and flexible enough to integrate into your body.

These are some of the notable innovations and inventions that forthcoming femtotechnology could provide for humanity, thanks to the power of femto computing.

Femto computing: Measuring at a subatomic scale

illustration of lights

Source:  Ecstadelic

An Australian AI researcher named Hugo de Garis wrote a few years ago in Humanity Plus Magazine about the great power this future technology could bring: “If ever a femtotech comes into being, it will be a trillion trillion times more “performant” than nanotech, for the following obvious reason. In terms of component density, a femtoteched block of nucleons or quarks would be a million cubed times denser than a nanoteched block. Since the femtoteched components are a million times closer to each other than the nanoteched components, signals between them, traveling at the speed of light, would arrive a million times faster. The total performance per second of a unit volume of femtoteched matter would thus be a million times a million times a million = a trillion trillion = 10^24.”

Through femtotechnology comes femto computing: computing measured at the scale of subatomic particles, which is three orders of magnitude smaller than picotechnology and six orders of magnitude smaller than nanotechnology.

Femto is derived from the Danish word femten, meaning “fifteen”; a femtometer, or “fermi”, is 10^−15 of a meter.

Ben Goertzel, one of the world’s leading AI researchers, highlighted in his companion article in Humanity Plus Magazine: “What a wonderful example we have here of the potential for an only slightly superhuman AI to blast way past humanity in science and engineering. The human race seems on the verge of understanding particle physics well enough to analyze possible routes to femtotech. If a slightly superhuman AI, with a talent for physics, were to make a few small breakthroughs in computational physics, then it might (for instance) figure out how to make stable femto structures at Earth gravity… resulting in femto computing – and the slightly superhuman AI would then have a computational infrastructure capable of supporting massively superhuman AI. Can you say “singularity”? Of course, femtotech may be totally unnecessary in order for a Vingean singularity to occur (in fact I strongly suspect so). But be that as it may, it’s interesting to think about just how much practical technological innovation might ensue from a relatively minor improvement in our understanding of fundamental physics.”

Having been a computer science professor, and having mapped computer science concepts to QCD (quantum chromodynamics) phenomena, Hugo de Garis writes that, when computing at the femto level, the essential ingredients of (digital) computing are still bits and logic gates.

“A bit is a two-state system (e.g., voltage or no voltage, a closed or open switch, etc.) that can be switched from one state to another. It is usual to represent one of these states as “1” and the other as “0,” i.e., as binary digits. A logic gate is a device that can take bits as input and use their states (their 0 or 1 values) to calculate its output.

The three most famous gates, are the NOT gate, the OR gate, and the AND gate. The NOT gate switches a 1 to a 0, and a 0 to a 1. An OR gate outputs a 1 if one or more of its two inputs is a 1, else outputs a 0. An AND gate outputs a 1 only if the first AND second inputs are both 1, else outputs a 0.

There is a famous theorem in theoretical computer science, that says that the set of 3 logic gates {NOT, OR, AND} are “computationally universal,” i.e., using them, you can build any Boolean logic gate to detect any Boolean expression (e.g. (~X & Y) OR (W & Z)).”

So if he can find a one-to-one mapping between these three logic gates and phenomena in QCD, he can compute anything in QCD; he would have femtometer-scale computation.
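
As a quick illustration of the gate universality de Garis refers to, here is a small sketch of my own (not from the essay) that builds the Boolean expression from the quote out of just NOT, OR, and AND:

```ts
// The three primitive gates.
const NOT = (a: boolean): boolean => !a;
const OR = (a: boolean, b: boolean): boolean => a || b;
const AND = (a: boolean, b: boolean): boolean => a && b;

// Any Boolean expression can be composed from {NOT, OR, AND},
// e.g. the example from the quote: (~X & Y) OR (W & Z).
const expr = (x: boolean, y: boolean, w: boolean, z: boolean): boolean =>
  OR(AND(NOT(x), y), AND(w, z));

console.log(expr(false, true, false, false)); // true  (~X & Y holds)
console.log(expr(true, false, false, false)); // false (neither clause holds)
```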

You can read the essay in full here.

Summary

In general, practical applications of femtotechnology are currently considered unlikely; it will be a long time before we see it used in real life.

But there are already sources noting the possibilities of femto-scale science in biology through ultrashort-pulse laser technology. With it, biologists can peer into known biological reactions on the shortest timescales ever.

For more new and exciting ideas about it, check out Femtochemistry and Femtobiology: Ultrafast Events in Molecular Science by Monique M. Martin and James T. Hynes.

By Tuan Nguyen

quantum computer

Technology review – New Google Quantum Computer

Over the years, the wonders of quantum computers have stirred the curiosity and imagination of the gifted minds in the tech community.

Since then, countless prototypes of these machines have surfaced in the tech market, with hopes of a brighter, more progressive, and faster-moving future.

Last week, a draft research paper revealed that Google has built a quantum computer that solved a problem that would take the best supercomputers 10,000 years.

The leaked paper was erroneously uploaded on the website of NASA’s Ames Research Center in Moffett Field, California before it was pulled down.

According to the Financial Times, which first broke the story, some of the researchers there are authors of the paper. Readers downloaded the manuscript before it vanished, and it has been circulating online, including on Reddit.

John Martinis, the physicist who leads Google’s quantum computing effort in Santa Barbara, California, remained tight-lipped about the leak. Others in the field think the paper is legitimate.

You can read the paper in full right on this link.

 

Tl; dr;

While our classical computers, such as laptops, smartphones, and even modern supercomputers, are extraordinarily powerful, an internet leak has now put the limits of these devices into question.

Google, one of the tech giants investing in quantum computing, is said to have just achieved quantum supremacy: a milestone where a quantum computer is able to perform a calculation that classical computers cannot practically do.

One name stands out among the paper’s authors: John Martinis.

He is a physicist who leads Google’s quantum computing effort in Santa Barbara, California.

Will Google’s quantum supremacy milestone have significant implications?

illustration of a computer

Source: Popular Science

While IBM, Microsoft, Intel, and numerous others are eyeing how to advance quantum computing technology, a story shrouded in intrigue surfaced earlier this month.

The search giant Google is said to have achieved quantum supremacy, as revealed through an internet leak.

As stated in the paper, physicists at Google used a quantum computer to perform a calculation that would overwhelm the world’s best conventional supercomputer, Summit.

The paper details a quantum processor called Sycamore, containing 54 superconducting quantum bits, or qubits, which is claimed to have achieved quantum supremacy.

“This dramatic speedup relative to all known classical algorithms provides an experimental realization of quantum supremacy on a computational task and heralds the advent of a much-anticipated computing paradigm,” the paper says.

The paper calculates the task would have taken Summit, the world’s best supercomputer, 10,000 years – but Sycamore did it in just 3 minutes and 20 seconds.

“It’s a great scientific achievement,” says physicist Chad Rigetti, the founder and CEO of Rigetti Computing in Berkeley and Fremont, California, which is also developing its own quantum computers.

Meanwhile, Greg Kuperberg, a mathematician at the University of California, Davis, calls the advancement “a big step toward kicking away any plausible argument that making a quantum computer is impossible.”

Google’s quantum computer ran a task called a random circuit sampling problem. In this problem, after a series of operations, each qubit outputs a 1 or a 0, and the aim is to calculate the probability of each possible outcome occurring.

Based on the tech giant’s findings, Sycamore was able to find the answer in just a few minutes, a task estimated to take 10,000 years on the most powerful supercomputer, Summit.
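
To make "sampling outputs and estimating outcome probabilities" concrete, here is a tiny classical toy sketch of my own (nothing like the real 53-qubit experiment, whose output distribution is precisely what classical machines cannot brute-force at this scale):

```ts
// Toy illustration of output sampling: draw random bitstrings from a stand-in
// "circuit" (here just a biased random generator) and estimate the probability
// of each outcome from observed frequencies.
const NUM_QUBITS = 3;          // 2^3 = 8 possible outcomes
const NUM_SAMPLES = 100_000;

function sampleBitstring(): string {
  let bits = "";
  for (let q = 0; q < NUM_QUBITS; q++) {
    bits += Math.random() < 0.55 ? "1" : "0"; // stand-in for a qubit measurement
  }
  return bits;
}

const counts = new Map<string, number>();
for (let i = 0; i < NUM_SAMPLES; i++) {
  const outcome = sampleBitstring();
  counts.set(outcome, (counts.get(outcome) ?? 0) + 1);
}

for (const [outcome, count] of [...counts.entries()].sort()) {
  console.log(`${outcome}: p ≈ ${(count / NUM_SAMPLES).toFixed(3)}`);
}
```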

Google’s quantum computer, called Sycamore, consisted of only 54 qubits – one of which didn’t work. For quantum computers to really come into their own, they will probably need thousands.

Though that is impressive, there is no practical use for it.

“We shouldn’t get too carried away with this,” says Ciarán Gilligan-Lee at University College London. This is an important step in the era of quantum computing, but there’s still a long way to go, he says.

Summary

This feat isn’t particularly useful beyond producing truly random numbers; it was a proof of concept, and the processor is not free of errors.

We still expect bigger and better processors to be built and used for more useful calculations.

In a statement, Jim Clarke at Intel Labs said:

“Google’s recent update on the achievement of quantum supremacy is a notable mile marker as we continue to advance the potential of quantum computing.”

As for the leak itself, Google appears to have partnered with NASA to help test its quantum computer; the two organizations made an agreement to do this in 2018, so the news isn’t entirely unexpected.

By Tuan Nguyen

machine-brain merge

Technology review – Neuralink

Imagine a machine interfaced directly with the human brain, making anyone SUPERHUMAN.

Now, it will all be possible.

Thanks to the company Neuralink, which has been making headlines lately with its controversial brain-chip interface and finally revealed its top-secret project via a YouTube live stream on July 17, 2019.

The neuro-technology company aims to develop an implant device that will enable humans to control compatible technology with their thoughts. In the short term, it will initially target serious brain diseases and brain damage caused by stroke.

With the technology it has been working on, Neuralink also aims, in the long term, at human enhancement, linking artificial intelligence (AI) with the power of the human mind.

Neuralink, or Neuralink Corp., is a tech startup founded by the futurist entrepreneur Elon Musk (also the CEO and founder of Tesla and SpaceX) in July 2016 to create “ultra-high bandwidth brain-machine interfaces to connect humans and computers.” In 2017, the company said its initial goal was to devise brain interfaces to alleviate the symptoms of chronic medical conditions.

Tl; dr;

Neuralink is a neuro-technology company that aims to build implants connecting human brains with computer interfaces via artificial intelligence, with initial use aimed at enabling paraplegics to perform simple tasks on phones or computers without any physical movement.

To be clear, the startup has not yet performed any human trials (it hopes to by the end of 2020), but it has already started testing its prototypes on rodents, and even “a monkey has been able to control the computer with his brain,” according to founder Musk.

The company has been super secretive about the nature of its work since its founding in 2016.

Merging humans with AI via the Neuralink implant


An artist’s visualization of how a Neuralink’s brain/computer interface would look.

Source: NYT

Philosophers, sci-fi enthusiasts, and daydreamers have long dreamed of turbocharging their brainpower or reading someone else’s mind.

But… that’s way too far to be realized for now.

Neuralink, a company led by serial entrepreneur Elon Musk, has just unveiled the first details of its ambitious, sci-fi-inspired implantable brain-computer interfaces (BCIs). The prototype’s name was derived from a science-fiction concept called neural lace, part of the fictional universe in Scottish author Iain M. Banks‘ series of novels The Culture.

As reported by The Verge, the company’s first public project aims to implant a device in people who have been paralyzed that will allow them to control their phones and computers in a way that resembles telepathy. It involves flexible “threads” (between 4 and 6 μm wide), thinner than a human hair and less likely to cause neurological damage than the brain-machine implants currently on the market.

These very small threads are injected deep into a person’s brain to detect and record the activity of neurons. The information gathered by the threads is then passed to an external unit that transmits it to a computer, where the data can be used, forming a brain-machine interface.

The system can include “as many as 3,072 electrodes per array distributed across 96 threads,” according to a white paper credited to “Elon Musk & Neuralink.” The company is also developing a neurosurgical robot capable of inserting six threads into the human brain per minute.

neuralink implants

Source: slashgear.com

Procedure

Each thread, less than one tenth the width of a human hair, contains 192 electrodes.

Each electrode group is encased inside a small implantable device that contains custom wireless chips, measuring four by four millimetres.

The threads are precisely inserted into the brain by a tiny needle at the end of the robot (around 24 microns in diameter), targeting only specific areas of the brain and avoiding damage to blood vessels.

Summary

In the future, Neuralink aims to make the operation as safe and painless as laser eye surgery.

As Neuralink president Max Hodak told The New York Times, currently the operation would require drilling holes into the patient’s skull to implant the threads.

But in the future, they hope to use a laser beam to pierce the skull with a series of tiny holes, which would not be felt by the patient.

By Tuan Nguyen