Why is machine learning ‘hard’?
I was specifically referring to the multi-dimensional nature of learning leading to exponential growth. Several times while I’ve been writing, I’ve discovered snippets of information which I know I’ve read earlier but didn’t understand the relevance of at the time. Not only do you need to find information; you need to find it in the order you need it. Every time you learn anything new, you need to re-read everything you ever read, most of which you threw away.
Nearly at the end, and I’ll write up my view on it so far (spoiler: positive stuff), but this thread seemed a good place to deliver an early Christmas present for @auxbuss:
Neural networks… in Redis: https://github.com/antirez/neural-redis/
What really hit home for me is in the penultimate paragraph:
Computation changes our idea of knowledge: instead of treating it as justified true belief, knowledge describes a local minimum in capturing regularities between observables.
There seems to be a problem earlier in the article: if state-change is a precursor to computation, how do pure functions over immutable state work? Are they pure maths or just magic? This is what happens when MIT moves from Lisp to Python. A very thought-provoking piece (I know I’m Wrong, but the thought process is a fun trip).
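To make the puzzle concrete, here’s a toy sketch of my own (not from the article): the same computation written once with explicit state change and once as a pure function over immutable values. The point is that the pure version doesn’t eliminate state change, it just pushes it down into the interpreter’s call stack.

```python
def factorial_mutable(n):
    """Computation as state change: an accumulator variable is updated."""
    acc = 1
    for i in range(2, n + 1):
        acc *= i          # the variable's state changes on every iteration
    return acc

def factorial_pure(n):
    """No visible mutation: each call just maps inputs to outputs."""
    return 1 if n < 2 else n * factorial_pure(n - 1)

# Both produce the same result; the 'state change' in the pure version
# lives in the machine underneath, not in the program text.
print(factorial_mutable(5), factorial_pure(5))  # 120 120
```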
I already had the difference between calculation and computing on my to-do list. You’ve either just made it easier or harder. I need to go and let the cat out.
I’ve had this ‘on file’ for a while and it’s old http://www.cs.northwestern.edu/~ian/What%20is%20computation.pdf
I find it interesting that Babbage & Ada seem to have seen the difference between the computational Analytical and the calculational Difference Engines more clearly than we do, that Turing defined mathematically what computability is, but that we still seem unclear about the definition of computation.
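The Difference Engine’s distinction can be shown in a few lines. It tabulated polynomials by the method of finite differences, using nothing but repeated addition, with no branching and no decisions; this sketch (my own illustration) does the same for f(x) = x², where the second difference is the constant 2. The Analytical Engine’s addition of conditional branching and loops is what pushes it over into computation in Turing’s sense.

```python
def tabulate_squares(n):
    """Tabulate x^2 by finite differences: additions only, no multiplication."""
    value, first_diff = 0, 1   # f(0) = 0; first difference starts at 1
    second_diff = 2            # constant second difference for a quadratic
    table = []
    for _ in range(n):
        table.append(value)
        value += first_diff        # each new entry is pure addition
        first_diff += second_diff
    return table

print(tabulate_squares(6))  # [0, 1, 4, 9, 16, 25]
```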
This just came up on a friend’s feed
That’s a good paper. It cops out a bit, ultimately answering “What is computation?” with “It’s tricky to define”, but it covers some good ground and gives the reader plenty to think about.
Also state-change doesn’t necessarily equate to change of (a variable’s) state. A system is almost always changing state.
I’m internet famous at last!
My view is that time is an illusion. An emergent phenomenon. I base this on absolutely nothing more than what I know of what we humans know right now. Which is a tiny fraction of what a CERN physicist knows. Space, as we know it, is probably the same.
The idea of quantum time has been around a while. The limit is the Planck time (about 5.4 × 10⁻⁴⁴ seconds), I believe.
So, your closing idea washes over me since there are no dimensions and there’s certainly no now. I suspect these are simply emergent properties of fields. I also suspect there are a few more fields than we already know about. I hope so.
A collection of fields jiggling about with their fluctuations and vacuum energy interacting with each other under an immutable set of rules. We call it The Universe.
Calculation or Computation?
There was a scientist on Radio 4 talking about a new theory that ‘gravity might be wrong’, so there’d be no need for Dark Matter to exist to make the equations work.
He said something like, “Physics will carry on, whether we understand it or not.” I feel the same way about Truth. It exists, even if we don’t know what it is, or if it is unknowable. Beliefs, testable hypotheses, theories, ‘Laws’ and scientific facts are us trying to edge closer, mostly in the dark, occasionally falling in bunkers.
I’m very very nearly done with the course and will write up my views on it when I am.
However, in the meantime this is awesome http://kevinhughes.ca/blog/tensor-kart - and even more so given that everything significant and techy here makes sense and is familiar, thanks to the course
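For anyone who hasn’t clicked through: TensorKart records (screenshot, joystick) pairs from a human driver and trains a network to imitate them. A toy imitation-learning sketch of that idea (my own illustration, not the project’s code, with a made-up one-dimensional ‘track offset’ standing in for the screenshot and a linear model standing in for the network):

```python
import random

random.seed(0)

def human_steer(track_offset):
    """Pretend expert: steer proportionally back toward the track centre."""
    return -0.8 * track_offset

# 1. Record training data, as TensorKart records screenshots + joystick state.
observations = [random.uniform(-1, 1) for _ in range(200)]
data = [(x, human_steer(x)) for x in observations]

# 2. Fit steer = w * offset by gradient descent (stand-in for the neural net).
w = 0.0
for _ in range(500):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.1 * grad

print(round(w, 3))  # converges toward the expert's gain of -0.8
```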
Is this related to the “discrete tensor networks” I mentioned in the blog post, above? Thanks for the link. It looks relevant to several of my interests. I was hoping to invent a multi-dimensional network thing once I was better at Clojure.
I don’t actually understand the question I’m asking yet. I deliberately didn’t find out until I’d written the blog entry. A quick search gave me “neural tensor networks” and weird quantum entanglement shizzle which I can imagine being a multi-dimensional network. When I failed to learn Lisp in 1981, it was for a project to add fuzzy logic into a semantic network. They’ve taken their time, haven’t they? I’m going to blame the PC revolution, as usual, for plunging us back into the dark ages.
TensorFlow looks awesome. If it had some modern language bindings, it’d be great.
MOND (or Modified Newtonian Dynamics) has been around for a long time. The problem is making the equations work. Newtonian physics has proved to be extremely accurate for every space mission we’ve deployed to date. So, it certainly scales to Solar System level.
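A hedged sketch of what “making the equations work” looks like. With MOND’s commonly used ‘simple’ interpolating function μ(x) = x/(1+x), Newtonian gravity is recovered at high accelerations (so Solar System missions are unaffected), while galactic rotation curves flatten at large radii, with v⁴ → G·M·a₀. The point mass M below is an illustrative value (~10¹¹ solar masses), not a fit to any real galaxy.

```python
import math

G  = 6.674e-11        # m^3 kg^-1 s^-2
a0 = 1.2e-10          # MOND acceleration scale, m/s^2
M  = 2e41             # illustrative point mass, kg (~1e11 solar masses)

def circular_velocity(r):
    """Circular velocity at radius r under MOND with mu(x) = x/(1+x)."""
    a_newton = G * M / r**2
    # Solving a * mu(a/a0) = a_newton for a gives a quadratic:
    a = 0.5 * (a_newton + math.sqrt(a_newton**2 + 4 * a_newton * a0))
    return math.sqrt(a * r)

# Far out, v^4 -> G*M*a0, independent of r: a flat rotation curve.
for r in (1e20, 1e21, 1e22):   # metres
    print(f"{r:.0e} m: {circular_velocity(r) / 1000:.1f} km/s")
```

At small radii `a_newton` dominates `a0` and the formula collapses back to the Newtonian `v = sqrt(G*M/r)`, which is why the discrepancy only shows up at galactic scales.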
I suspect the reason this is being discussed is because of the recent death of Vera Rubin; a personal heroine of mine. Her book, Bright Galaxies, Dark Matters, is wonderful for anyone remotely interested in dark matter. It’s one of the very few paper books I retain. I just love owning it and reading bits of it from time to time.
Vera’s hope was that it is Newtonian physics that is wrong, and that dark matter, as a sub-nuclear particle, is not the cause of the galactic gravitational anomaly. I’m with Vera on this.
It’s disgraceful that she wasn’t awarded a Nobel for her work, and I’m not the only one to think so: http://www.nytimes.com/2017/01/04/opinion/why-vera-rubin-deserved-a-nobel.html
Edit: So the non-relativistic component of MOND bothered me. I found this: https://arxiv.org/abs/1112.3960
Joscha Bach is @plinz on the twitters.
This is great. I’m only halfway through. It mentions the state-vector of the universe. JSD talks about a state-vector for a single process, and I think it contains a ‘text pointer’. The universe must have a big book, full of them.
I went to a talk by Jackson on how to use JSD with OO. He seems to have had the same ideas.