<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Naomi Eterman, Author at The McGill Daily</title>
	<atom:link href="https://www.mcgilldaily.com/author/naomi-eterman/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.mcgilldaily.com/author/naomi-eterman/</link>
	<description>Montreal I Love since 1911</description>
	<lastBuildDate>Wed, 22 Jan 2014 04:57:57 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://www.mcgilldaily.com/wp-content/uploads/2012/08/cropped-logo2-32x32.jpg</url>
	<title>Naomi Eterman, Author at The McGill Daily</title>
	<link>https://www.mcgilldaily.com/author/naomi-eterman/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>The rise of the brain-bots</title>
		<link>https://www.mcgilldaily.com/2014/01/the-rise-of-the-brain-bots/</link>
		
		<dc:creator><![CDATA[Naomi Eterman]]></dc:creator>
		<pubDate>Mon, 20 Jan 2014 11:00:44 +0000</pubDate>
				<category><![CDATA[FrontPage]]></category>
		<category><![CDATA[inside]]></category>
		<category><![CDATA[MainFeatured]]></category>
		<category><![CDATA[Sci + Tech]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[biomimetics]]></category>
		<category><![CDATA[brain]]></category>
		<category><![CDATA[Chris Eliasmith]]></category>
		<category><![CDATA[Christopher Pack]]></category>
		<category><![CDATA[cognitive computing]]></category>
		<category><![CDATA[computer]]></category>
		<category><![CDATA[IBM]]></category>
		<category><![CDATA[mcgill]]></category>
		<category><![CDATA[McGill Daily]]></category>
		<category><![CDATA[MNI]]></category>
		<category><![CDATA[neuroengineering]]></category>
		<category><![CDATA[neuroscience]]></category>
		<category><![CDATA[neurotech]]></category>
		<category><![CDATA[singularity]]></category>
		<category><![CDATA[Synapse]]></category>
		<category><![CDATA[visual cortex]]></category>
		<category><![CDATA[Watson]]></category>
		<guid isPermaLink="false">http://www.mcgilldaily.com/?p=34824</guid>

					<description><![CDATA[<p>How neuroscience is changing technology </p>
<p>The post <a href="https://www.mcgilldaily.com/2014/01/the-rise-of-the-brain-bots/">The rise of the brain-bots</a> appeared first on <a href="https://www.mcgilldaily.com">The McGill Daily</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>Two and a half million years ago, the <em>Homo</em> genus of primates emerged by virtue of a rapidly growing brain, one of the largest in relation to body size of any mammal. The human brain has become the defining feature of our species, and recent advances in brain research have inspired neuroscientists and programmers alike to turn information about this mysterious and complex organ into biomimetic (‘life-imitating’) technologies.</p>
<p>One such technology was introduced in 2012, by scientists at the University of Waterloo. <a href="https://uwaterloo.ca/news/news/waterloo-researchers-create-worlds-largest-functioning-model" target="_blank">Spaun</a>, short for Semantic Pointer Architecture Unified Network, is the largest computer simulation of a functioning brain to date. It is the brainchild of Chris Eliasmith, a professor in philosophy and systems design engineering at the University of Waterloo, who developed the system as a proof-of-principle supplement to his recent book: <em>How to Build a Brain</em>.</p>
<p>The model is composed of 2.5 million simulated neurons and four different neurotransmitters that allow it to ‘think’ using the same kind of neural connections as the mammalian brain. Instead of code, Spaun receives visual inputs in the form of numbers and symbols, to which it responds by performing simple tasks with a simulated robotic arm. These tasks resemble basic IQ test questions, and include pattern recognition and retracing visual input from memory.</p>
<blockquote><p>“Models like Spaun are not expressed using standard computational structures. In order to run on today’s computers, we have to translate the model into code; but it is more natural and efficient to run on specialized hardware that is structured more like a brain.”</p></blockquote>
<p>“There are no connections in the model that aren’t in the brain,” explains Eliasmith. “Models like Spaun are not expressed using standard computational structures. In order to run on today’s computers, we have to translate the model into code; but it is more natural and efficient to run on specialized hardware that is structured more like a brain.”</p>
<p>Models like Spaun differ from other forms of artificial intelligence (AI) in that they are committed to solving problems in the same way humans do. Cognitive computing systems sit at the other end of the spectrum: they are capable of ‘machine learning,’ which allows them to analyze and recall patterns and trends from large amounts of data. These systems are undoubtedly clever, but their problem-solving strategies bear little resemblance to our own.</p>
<p>For instance, IBM’s research team started developing a <a href="http://www-03.ibm.com/innovation/ca/en/watson/science-behind_watson.shtml" target="_blank">new cognitive computer in 2006</a>, a namesake of IBM’s former CEO, Thomas J. Watson, which became the first of its kind to replicate the language and analytical ability of humans. Watson made headlines around the world after it beat long-time <em>Jeopardy!</em> champions Ken Jennings and Brad Rutter on the popular gameshow in 2011. However, the servers containing <a href="http://www.nytimes.com/2010/06/20/magazine/20Computer-t.html?pagewanted=8&amp;_r=1" target="_blank">Watson’s</a> data filled an entire room above the set, and the system ran over 100 algorithms per clue in order to find the most probable answer.</p>
<p>It has taken two years for the IBM team to shrink Watson from its original 16-terabyte mammoth state to the size of a pizza box, all while increasing its processing speed by 240 per cent; however, it is still a far cry from the efficiency of a human brain, which consumes less power than a light bulb and weighs an average of three pounds.</p>
<p>The advent of neuromorphic devices has brought us closer to this ideal. IBM’s SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) project has recently developed prototypes of a silicon neurosynaptic chip that mimics the brain’s natural processing capacity using artificial ‘neurons’ connected to one another. The building block of these chips is a ‘corelet’ that represents a network of synapses for a specific function; corelets can be combined and programmed for more complex applications. The project’s ultimate goal is a human-scale cognitive computing system – on the order of 10 billion neurons and 100 trillion synapses – that would occupy the same volume as a brain.</p>
<blockquote><p>“There is still a long way to go in terms of developing the surgical approach, the technology that interfaces with the brain, and the algorithms that permit the camera to communicate with the brain.”</p></blockquote>
<p>These brain-like technologies could not be possible without the recent surge in discoveries about brain function at genetic, molecular, and behavioural levels. In part, these can be attributed to advances in genetics, stem cell biology, and imaging.</p>
<p>Several innovative tools have made an impact on research in brain circuitry, including genetically engineered viruses that can trace infected neuron pathways to determine their connections in brain tissue, the <a href="http://www.ncbi.nlm.nih.gov/pubmed/17972876" target="_blank">Brainbow</a> technique (a genetic method to fluorescently label individual neurons) developed at Harvard Medical School in 2007, and the emergent field of optogenetics, which is used to artificially stimulate nerve cell activity. Still, it can take up to twenty years for new research to be translated into technology.</p>
<p>The creation of brain-machine interfaces is an excellent example of this process. Neuroengineering has already made such interfaces a reality, as with cochlear implants and deep brain stimulation for Parkinson’s disease, both based on research into auditory function and disease pathology. One laboratory at the Montreal Neurological Institute is working on laying the scientific groundwork for using visual brain-machine interfaces to treat blindness. Christopher Pack, a professor in visual neurophysiology at McGill, studies the function of visual cortical circuits in the brain at a mathematical level. Pack envisions a small camera connected directly to the visual cortex as a solution to retinal degeneration.</p>
<p>“There is still a long way to go in terms of developing the surgical approach, the technology that interfaces with the brain, and the algorithms that permit the camera to communicate with the brain,” Pack told The Daily. “Previous work has succeeded in allowing blind subjects to detect spots of light and perhaps crude shapes, but the longer term goal would be to restore the perception of detailed vision – things like faces, letters, motion, et cetera.”</p>
<p>Pack’s research is still in the early stages of translation into the tech sphere, but it is a clear indicator of the future of neuroscientific progress. We are hurtling toward the time when computers may surpass their creators in intelligence – a point the great mathematician John von Neumann called the ‘singularity.’ Shortly after von Neumann’s death, an unfinished manuscript entitled <em>The Computer and the Brain</em> was published, outlining his thoughts on the computer as a brain-like processor. Even in the 1950s, the parallels were evident – and the conclusion startling. With the rate of technology accelerating the way it has in the last decade, it may well be time to start redefining what it means to be human.</p>
<p>The post <a href="https://www.mcgilldaily.com/2014/01/the-rise-of-the-brain-bots/">The rise of the brain-bots</a> appeared first on <a href="https://www.mcgilldaily.com">The McGill Daily</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Pulsars: the cosmic lighthouses</title>
		<link>https://www.mcgilldaily.com/2013/11/pulsars-the-cosmic-lighthouses/</link>
		
		<dc:creator><![CDATA[Naomi Eterman]]></dc:creator>
		<pubDate>Mon, 11 Nov 2013 11:00:33 +0000</pubDate>
				<category><![CDATA[Sci + Tech]]></category>
		<category><![CDATA[Sections]]></category>
		<category><![CDATA[astrophysics]]></category>
		<category><![CDATA[atomic clock]]></category>
		<category><![CDATA[gravitational ripples]]></category>
		<category><![CDATA[mcgill]]></category>
		<category><![CDATA[McGill Daily]]></category>
		<category><![CDATA[neutron stars]]></category>
		<category><![CDATA[physics]]></category>
		<category><![CDATA[pulsar]]></category>
		<category><![CDATA[relativity]]></category>
		<category><![CDATA[scitech]]></category>
		<category><![CDATA[space-time continuum]]></category>
		<guid isPermaLink="false">http://www.mcgilldaily.com/?p=33900</guid>

					<description><![CDATA[<p>Discovering our galactic backyard</p>
<p>The post <a href="https://www.mcgilldaily.com/2013/11/pulsars-the-cosmic-lighthouses/">Pulsars: the cosmic lighthouses</a> appeared first on <a href="https://www.mcgilldaily.com">The McGill Daily</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p><em>Correction appended on November 16</em></p>
<p>Carl Sagan wrote in his best-selling science book <em>Cosmos</em> that “the surface of the Earth is the shore of the cosmic ocean.” Generations of emboldened astrophysicists have since expanded our knowledge of the universe. Today, we can follow the Curiosity rover as it cruises over Mars, and track the Voyager 1 spacecraft, launched in 1977, which finally crossed into interstellar space beyond our solar system in September. The Kaspi lab at McGill focuses its gaze on the pulsars that sprinkle our own Milky Way like lighthouses on the cosmic shore.</p>
<p>Vicky Kaspi, a Canada Research Chair in Observational Astrophysics and the head of the McGill Pulsar Group, has spent over 20 years examining one such phenomenon in our universe. Neutron stars are the remnants of stars several times more massive than our sun. These stars end their lives with a bang: when the fuel in the core runs out, the core collapses under its own gravity, blowing out the star’s outer layers in a spectacular supernova explosion and leaving the collapsed core behind as a neutron star. Pulsars, a subset of neutron stars, are only about 20 kilometres across, and can spin hundreds of times each second, emitting lighthouse-like beams of radio waves from their poles. A teaspoon of this material would weigh billions of tons, and occasionally, unexpectedly, these objects explode – “particularly when we go on vacation,” Kaspi jokes.</p>
<p>Kaspi’s lab uses orbiting NASA satellites and large ground-based telescopes in Arecibo, Puerto Rico, and in West Virginia to capture the X-rays and radio waves that pulsars emit. The light from their rotations is detected from Earth at periodic intervals, and these ‘pulses’ arrive so regularly that their timing rivals the highest atomic clock standards.</p>
<blockquote><p>Pulsars act like ships sailing through the cosmic ocean, producing gravitational ripples in the fabric of space and time.</p></blockquote>
<p>Pulsars act like ships sailing through the cosmic ocean, producing gravitational ripples in the fabric of space and time. Albert Einstein’s special theory of relativity holds that space and time exist on a continuum, with time as the fourth dimension; in this space-time continuum, faster-moving objects experience time at a slower rate. His general theory of relativity adds that large masses warp the space-time fabric, and this warpage – known to us as gravity – shapes the orbits of celestial bodies. In rare instances, two pulsars may exist in a binary orbit around each other, creating perfect conditions for the examination of Einstein’s theories.</p>
<p>Robert Archibald, a graduate student in the Pulsar Group, studies a highly magnetized kind of pulsar called a magnetar. Magnetars, at only a few thousand years old, are some of the youngest neutron stars in the galaxy, and are among the most magnetic objects in the known universe – if the moon were a magnetar, it would wipe out all the electronics on our planet. Archibald first heard of Kaspi’s work in 2008, when Kaspi appeared on an episode of CBC Radio One’s science program “Quirks and Quarks.” He recently published the first firm evidence of a sudden slowdown in the rotation of a magnetar, an important discovery in the study of these objects. “These are conditions we can’t create in a laboratory on Earth,” he says. “They are the only places where you have matter behaving at these extreme conditions.”</p>
<p>A major concern for radio astronomers is the increasing saturation of Earth’s atmosphere with radio signals. “Radio quiet zones,” such as the one surrounding the Green Bank Telescope in West Virginia, are increasingly hard to come by in the age of laptops and iPhones. Still, scientists are harnessing new technologies to improve detection methods beyond anything we have ever imagined; for example, extrasolar planets, or ‘exoplanets,’ are emerging from cosmic obscurity to teach us more about planets that reside outside of our solar system, and give us renewed hope in the search for extraterrestrial life. For Kaspi, however, our galaxy is fascinating enough – as she says, “I’m busy in my backyard.”</p>
<p><i>The article previously stated that the pulsars were found within our solar system. In fact, pulsars only exist outside our solar system. The Daily regrets the error.</i></p>
<p>The post <a href="https://www.mcgilldaily.com/2013/11/pulsars-the-cosmic-lighthouses/">Pulsars: the cosmic lighthouses</a> appeared first on <a href="https://www.mcgilldaily.com">The McGill Daily</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Doing the web crawl</title>
		<link>https://www.mcgilldaily.com/2013/09/crawling-through-the-web/</link>
		
		<dc:creator><![CDATA[Naomi Eterman]]></dc:creator>
		<pubDate>Mon, 30 Sep 2013 10:00:54 +0000</pubDate>
				<category><![CDATA[Sci + Tech]]></category>
		<category><![CDATA[Sections]]></category>
		<category><![CDATA[data]]></category>
		<category><![CDATA[internet]]></category>
		<category><![CDATA[McGill Daily]]></category>
		<category><![CDATA[McGill University]]></category>
		<category><![CDATA[Montreal Girl Geeks]]></category>
		<category><![CDATA[open data]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[web crawl]]></category>
		<guid isPermaLink="false">http://www.mcgilldaily.com/?p=32692</guid>

					<description><![CDATA[<p>The ins and outs of open data</p>
<p>The post <a href="https://www.mcgilldaily.com/2013/09/crawling-through-the-web/">Doing the web crawl</a> appeared first on <a href="https://www.mcgilldaily.com">The McGill Daily</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>These days, the answer to a question is just a few clicks away. Search engines like Google make this possible by discovering, indexing, and ranking websites using algorithm-driven virtual spiders. Without these ‘web crawlers,’ navigating through the billions of websites that comprise the World Wide Web would be a daunting task.</p>
<p>Still, for overarching questions about the trends and connections described by web crawl data itself, individuals would need access to data storage and computing power that until recently were available only to Google. Lisa Green, the director of the non-profit open data initiative <a href="http://commoncrawl.org/">Common Crawl</a>, spoke at the RPM Startup Centre in Griffintown last week about how her organization is simplifying the process of data analysis for all kinds of ‘curious coders.’</p>
<p>The talk, organized by <a href="http://montrealgirlgeeks.com/">Montreal Girl Geeks</a>, focused on the philosophy of open data and its utility to small-scale researchers, educators, and entrepreneurs.</p>
<p>Gil Elbaz, a Silicon Valley database engineer and co-founder of Applied Semantics, a company later acquired by Google, founded Common Crawl in 2008 with the mission of democratizing access to the web. According to the organization’s website, Common Crawl “produc[es] and maintain[s] an open repository of web crawl data that is universally accessible.” The corpus covers approximately 300 terabytes of data corresponding to 8 billion web pages to date, all stored on Amazon’s S3 cloud storage service. In keeping with the objective of a freer web, the entire crawl algorithm is published and publicly available on GitHub, a platform where coders publish, store, and share code.</p>
<p>The Common Crawl Foundation has facilitated many success stories. In 2012, Matthew Berk of Zyxt Labs, Inc. tested around 1.3 billion URLs from crawled web data. After discovering that almost a fifth of the websites contained references to Facebook URLs, he founded a new social media start-up called Lucky Oyster that allows users to make recommendations to friends based on information from networking websites.</p>
<p>In the same year, Common Crawl hosted a code contest that showcased the breadth of crawl-data applications in different fields. <a href="http://www.data-publica.com/">Data Publica</a>, a Paris-based open data directory, mapped the key players in the world of French open data and their connections to each other in the virtual sphere. Another group mapped the probable definition of a word based on its appearance in Wikipedia entries. The possibilities are truly staggering.</p>
<p>Green acknowledges the appeal of Common Crawl to business and startups, but is more inspired by the social implications of an openly accessible data repository. Individuals can now seek data-based, computational solutions for the greater good. Next month’s <a href="http://www.ecohackmtl.org/">écoHACK Montréal</a>, for example, partners experts in urban sustainability with tech-savvy coders to collaborate on sustainability projects in the city. Easier access to knowledge will also provide useful tools “for the two guys in the basement with a good idea,” Green added.</p>
<p>Opening up databases can even precipitate unexpected windfalls for taxpayers. When the National Health Service (NHS) in the UK opened prescriptions data up to the public last year, interested third parties discovered that doctors were spending an average of £27 million per month prescribing proprietary (i.e. patented) cholesterol-lowering statins to patients when generically available drugs were equally effective. A switch to the cheaper drugs would save the NHS £200 million a year.</p>
<p>Changing the status quo would also make open data an appealing alternative to the jealously guarded copyrights of the printing age. Creative Commons, where Green was formerly chief of staff, is a non-profit organization that offers copyright licenses for creative and academic material. It has reshaped the possibilities of copyright protection on the internet for large-scale collaborative organizations like Wikipedia and independent artists alike. Admittedly, there is at present too little case law regarding data to make a Creative Commons approach to open data feasible.</p>
<p>The ‘open’ movement extends well beyond data and into the realm of open education, global access licensing for medicines, and open access to research. The movement has also gained traction at McGill with clubs such as Universities Allied for Essential Medicines advocating for the University’s adoption of global access policies, which would ensure generic production of all McGill-affiliated medical innovations.</p>
<p>As the information available on the internet rapidly expands, open data is becoming an increasingly important tool for the computer-literate generation. Leann Brown, the organizer of the open data event, is passionate about spreading the ‘open’ message to people in the technological world: “That’s what Montreal Girl Geeks is about – encouraging you to teach and enable yourself and share that knowledge in the community.”</p>
<p>The post <a href="https://www.mcgilldaily.com/2013/09/crawling-through-the-web/">Doing the web crawl</a> appeared first on <a href="https://www.mcgilldaily.com">The McGill Daily</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
