I have not been engaged with anime lately, and I sincerely don't have any motivation to get back into it at the moment. I am reading more manga/manhwa, though. Yuri content is something that has been helping me get through all of this... life.
I much prefer my Steam profile, because here there is a lack of emoticons and it just seems so empty. I know I can format my profile and make it look a little different, I simply don't want to. Sometimes you don't enjoy your job as your hobby, so I prefer to keep it plain text here.
The rest is pretty obvious. I like Yuri, Maths, Music and AI. I am currently doing research in the Deep Learning area (GANs specifically) and I am falling more and more in love with it daily. I don't think, at least for now, that something like a completely sentient machine is possible. Imagine trying to teach emotions to a logical apparatus, with the high risk of it misunderstanding the wide range of sentiments involved. Still, I believe we are reaching a critical point between the power of AI and ethics, and it should be discussed more before it becomes a catastrophe. Deep Learning can generate fake videos, fake audio, and even fake pornographic videos; the limit should be our own human conscience, but we know how that works.
The final question is: why am I writing this? For the same reason you are reading it, we are bored.
I do have similar thoughts about aging, partly because my parents are quite old (they had me quite late). I also think about fitness, especially mental fitness. I also think about just running out of time! For me, my days are a resource I always want more of. I guess that says good things about my mood and about what I spend my time on. I think your fear is not so bad; it's motivation, and it probably means you like your time!
Your thoughts on the topic seem not too far-fetched most of the time ^^
Ooh, TF Estimators! I've had some fun with those, including various custom code. My experience has been mixed, but it appears to be getting better over time as TF has resources allocated to its development. I can only recommend lots of testing/validation, as hidden bugs in machine learning code are a real scare and can be hard to find. Good luck, and do tell if there's anything you'd like to discuss on the topic!
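If it helps, this is roughly the shape I mean by custom Estimator code; a minimal TF 1.x-style sketch where the model itself is just a placeholder, not anything we actually run:

```python
import tensorflow as tf

# Minimal TF 1.x-style custom Estimator: a single dense layer doing regression.
# The model, shapes, and model_dir are placeholders, purely to show the structure.
def toy_model_fn(features, labels, mode):
    predictions = tf.layers.dense(features["x"], units=1)

    if mode == tf.estimator.ModeKeys.PREDICT:
        return tf.estimator.EstimatorSpec(mode, predictions=predictions)

    loss = tf.losses.mean_squared_error(labels, predictions)
    optimizer = tf.train.AdamOptimizer(learning_rate=1e-3)
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn=toy_model_fn, model_dir="/tmp/toy_estimator")
# From here you'd call estimator.train(input_fn=...) and estimator.evaluate(input_fn=...),
# and that evaluate step is where a lot of the hidden-bug hunting happens.
```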
Yes, I am all for having some variation, proper breaks where you decouple your mind from the usual work, and so on. I think it is good for both performance and mood (which eventually impacts performance).
I'm training for a 10km race in Dubai in two weeks. It's been tough, but quite fun and it's been going well. The ache from all the runs is not entirely terrible either.

HAPPY NEW YEAR! Have an excellent time this year, and my best of blisses.
Phew. I am back from the great beyond of Christmas celebrations! I hope yours went well, too. You might have noticed I took a hiatus from the MAL forums, similar to yours.
It's funny you should bring up side project studies, as most of what I've been doing when I finally get some time alone is toying with various code ideas! Just like you, I got tired of games as a distraction and as an outlet for creativity.
Re: your comment about wasting time, do you worry about age for similar reasons?
I'd say automata are not too far off: a finished, trained model will, much like an automaton, transition from certain states to other states, and in each state it has a new set of options for the next step. This gets complicated, but if you imagine the entire space of possible states and transitions, you'll see lots of interesting groups of them that (hopefully) make sense according to what your training set implies. This is different from classic automata representations (though not entirely incompatible).
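To make the analogy concrete, here's a toy sketch of a recurrent step as a continuous state transition; the weights are made up, it's only there to show that the next state depends on the old state plus the input:

```python
import numpy as np

# Toy "RNN as automaton": the hidden state h plays the role of the automaton's
# state, and one step of the cell is a continuous state transition.
W_xh = np.array([0.5, -0.3])          # input -> hidden weights (made up)
W_hh = np.array([[0.9, 0.1],
                 [-0.2, 0.8]])        # hidden -> hidden weights (made up)

def step(h, x):
    # The new state depends on both the old state and the current input.
    return np.tanh(x * W_xh + h @ W_hh)

h = np.zeros(2)                        # start in a "blank" state
for x in [1.0, 0.0, 1.0]:              # a short input sequence
    h = step(h, x)
    print(h)                           # the state after each transition
```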
Definitely important not to stem the flow of inspiration with too much pessimism! Even failed attempts might not be wasted.
How did your CAE thing go? I'm really curious, it sounded like a fun problem and I wonder how effective your solutions can be given the weirdness of video footage in general.
--
I've been there, doing Android, and I would rather avoid it. As with anything relating to code, though, it's nice to have experienced it and learned some of the details of what people often use and develop for.
I think the main difference between hobby and work is whether there is likely to be a need to stop and get away from it to do something else. If that need is more frequent than your vacations, you are really in trouble! For a hobby it's really no big deal. Personally I chose something where it feels like I can always get new inspiration, all it takes is a new challenge. I hope you find something similar.
My turn to say it's been a while ^^ I'm going back to my parents' city just after New Years, but other than that I think things are finally back to normal.
Haha, I know that feeling. Sometimes I feel like my goal in life, aside from my long string of responsibilities, is to find an opportunity to catch up on sleep. Luckily my workplace has good flexible hours and I don't have kids, so I can find the time to sleep even when times get rough. Well, if you are never tired then you are never exerting yourself. Where's the excitement in that?
Yes, I think that is often true. Neural Networks have very high potential, but in practice they struggle at whatever you are not making easy for them. That said, only a few years ago this problem was much more pronounced. Now, often they sort themselves out without you having to tear your hair out over tiny parameter changes or problem details. They feel less brittle, largely due to improvements in training for lots and lots of reasons I could get into. He's right, this is research. This is "bleeding-edge".
Sure, you could say that RNNs learn patterns rather than rules, but do remember that an ideally trained network does generalize well. So, for a change that makes little difference in training, say if ABA, ACA, ADA give very similar output to AAA in all training cases (or even the same), it is possible for the network to learn to more or less "ignore" that change. How exactly it does this varies, but it could, for instance, de-emphasize (multiply by a low number) the impact of the letter that comes after an 'A'.
One crazy idea (or maybe it's not crazy at all?) in the memory field is to let the model have ordinary computer memory, actual bytes to store stuff in, and let the network choose when to read and write there. I haven't seen a usable model utilizing that at all yet, but maybe it's something for the future?
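Research versions of this idea (Neural Turing Machines and the like) use a soft, differentiable form of addressing rather than literal bytes. A toy sketch of a content-based read, with everything here made up for illustration:

```python
import numpy as np

# Toy content-based read from an external memory matrix, in the spirit of
# memory-augmented networks. Rows that look similar to the controller's key
# contribute more to what gets read back.
def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

memory = np.random.randn(8, 4)     # 8 slots, each a 4-dim vector
key = np.random.randn(4)           # what the controller wants to look up

weights = softmax(memory @ key)    # soft addressing: a differentiable "choice"
read_vector = weights @ memory     # weighted mix of slots, shape (4,)

# Writing works similarly: blend new content into the slots according to weights.
memory += 0.1 * np.outer(weights, np.random.randn(4))
```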
Paths of Programmers are Painful! Headbutting problems is a daily endeavor. I never thought I was a masochist, but who knows considering I love such a terrible field.
I'm glad your professors are inspiring, I owe a lot of my ability to put in effort to a few very good teachers I have had over the years, including one professor who had some really excellent lectures in AI (the one that also supervised my Master's and nominated me for the award).
--
I'd like to meet such a person. The one born to make Android apps. They must be some alien creature, surely? I hope you find something more exciting to program before your vigor runs out!
I am sure some philosophers would argue philosophy is at the root of everything! Always relevant, always connected. No worries, take your time. I look forward to when you'll share your idea with me.
A pal with a pen. A pen-utilizing pal! Pal-ship over penmanship. It's something else. I feel like I bridged 10 important years (teenage me talking to a lady around my age today aka mid twenties) just through politeness and mutual curiosity. I fancy some old ideas like that ^^
If I wasn't into AI, I'd still be all over code and programming, I'm sure. There's nothing quite like building something quickly through sheer logic. If programming was off the table, I would probably end up in research first and then some industrial application of it later. I've been interested in understanding anything and everything for as long as I can remember. My family figured Medicine would fit me, but I at least mildly mind blood, needles, and bodily et ceteras, so I'm not sure it's ideal for me. I also never fancied chemistry for some reason, so a straight engineering field is more natural than one with biology and chemistry. What about you?
Long work days have begun again now that I am not sick, but it's good to be back there.
Haha, I guess that's a little early for the PhD, but it's good to have ambitions that are attractive to you. Motivation is a more difficult challenge than any single subject, for sure. My journey at uni felt like much more of a battle in cultivating internal motivation and will than a battle of succeeding in any one subject. Hey, this is the awesome part of "nerds", I hope you hold on to your thirst for knowledge!
It really puts things in perspective, doesn't it? With this pace, individuals and relatively small groups of individuals are actually pushing things along. Looking through recent work in a narrow, but important field like RNNs I see some authors over and over in important papers and they do amazing work.
Glad to be of service ^^
Typically I wouldn't say internal memory is a disaster, since training makes the network learn that sometimes past input is irrelevant. Usually it's a bit pricey, though; it can be challenging to learn something as nice as "if the input is A, dismiss all previous input and give B". It'll often instead learn that AAA is B, BBA is B, ABA is B, etc. You see what I mean? It's basically memorizing lots of individual cases instead of noticing the general rule "if the last input is A, just give B". The more learning it has to do, the more runtime it takes and the harder it is to find a near-optimal solution.
Yes, the capacity, quality, and nature of the internal memory are a big challenge and something researchers are working on today. For instance, I work with a rule of thumb that anything over 100 steps is going to be hard to remember for the nets I use right now (the technical explanation is not exactly "capacity", though; it's more that the memory wears out over many steps into the past). This is not a hard rule, as the network and samples matter a lot; it's just to give you a general idea of what the issue is like.
There's lots of problems! But there's lots of smart people who have already mitigated many of them, so the nets function quite well. There are plenty left to do though, hence my thesis.
Well, it's better to have a long list than no list at all. Just call me Minor Inspiration Man (I am completely aware how lame that sounds).
Your professor sounds like they've got a good attitude! Maybe with enough effort and some aid you can get a solid enough basis that things start to make sense in the Computer Network world. That said, there's only so many things you'll be able to understand in their entirety (as you touched on earlier, immortality please).
Hahah.. Your animosity for Python is something else. I'm glad it's only a tool in a large toolbox, otherwise I'd be tasked to change your mind. Time will tell. For now, it definitely has its problems.
--
Loops are bad and should feel bad. Things like broadcasting, parallel map, and vectorization in general feel good. Too good! When the programming mood hits and flow is at its peak, these things are pure happiness.
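For anyone reading along, this is the kind of thing I mean, in numpy (a toy example, obviously):

```python
import numpy as np

x = np.random.randn(1_000_000)

# The loop version: slow, verbose, and easy to get subtly wrong.
squared_loop = np.empty_like(x)
for i in range(len(x)):
    squared_loop[i] = x[i] * x[i]

# The vectorized version: one line, running in optimized C under the hood.
squared_vec = x * x
assert np.allclose(squared_loop, squared_vec)

# Broadcasting: add a per-column offset to a whole matrix, no loop in sight.
m = np.random.randn(1000, 3)
offsets = np.array([1.0, -2.0, 0.5])
shifted = m + offsets              # offsets are stretched across all rows
```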
Ah, good. The world needs more Math-y people if you ask me. Please continue to nurture your affection for Mathematics! I'm counting on you.
Oof, mixing Philosophy and Deep Learning sounds like a wild ride way down into the depths. I'm curious what specifically it would be about.. I'd love to hear about it, once you make your final decision.
I'd be an English tutor for you, like my pen pal was for me growing up, but I don't do that sort of embarrassing thing in public!
I agree. That reminds me of a conversation I skimmed today between two people, where there was a lack of substance, yet the messages were somewhat long. In any case, I am not a superficial individual, and thus I do not believe in superficial discussions. I think an exchange with me is worthwhile. What do you tend to talk about? I aspire to be a psychologist, so I spend my time helping people.
Ahh, I'm wondering about a PhD! The idea really intrigues me, since when you work there is so little time to get some space and do research. This field really benefits from more theory, too, so it's very fitting for a PhD. I have been offered one through my workplace if I want it, but there's still so much to do at the job and I'm still learning a huge amount here.
Doubt is extremely important, I very much agree.
Rediscovering things that have been tried is great, though, as you show an aptitude for what makes sense to try. You can also follow the trail of discarded and improved methods to the current State of the Art, which is hugely beneficial for getting into the world of the top researchers who push the field.
First absorbing the knowledge, and then reflecting on it and questioning it sounds completely appropriate to me. Knowledge gives you a good basis, but without reflection no new and interesting ideas will have a chance to appear. Knowledge is power! With it, you can stand on the shoulders of giants. If that's not an advantage in something as difficult as this, I don't know what is.
I promised a very quick'n'easy RNN explanation, so here it is:
As you said, the key to RNNs is that they have internal memory. What this means in practice is that when a normal net sees an A, it will always spit out the same thing, let's say B. But an RNN has internal memory, a state if you will, which it uses to change what it puts out. So, when it sees an A, it can spit out B, but depending on what it has seen in the past (i.e., what state it is in), it can spit out C instead.
As an example, a network in state ' ' (let's call it blank) can put out B if it sees A. But a net that is in state 'AA' (it has seen two As in the past two time steps) might have the setup that state 'AA' + input 'A' gives output C. This in turn means the sequence 'A' gives B and the sequence 'AAA' gives C. Now the net handles different sequences differently. Easy-peasy, right?
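If it helps, here is that exact toy example written out as a literal state machine; a trained RNN effectively learns a continuous, fuzzy version of a table like this:

```python
# The A/B/C example as a literal state machine. Not a real network,
# just the lookup-table version of "output depends on state + input".
def run(sequence):
    state = ""                               # the 'blank' state
    output = None
    for symbol in sequence:
        if symbol == "A" and state.endswith("AA"):
            output = "C"                     # state 'AA' + input 'A' -> C
        elif symbol == "A":
            output = "B"                     # otherwise, A -> B
        state += symbol                      # remember what has been seen
    return output

print(run("A"))      # B
print(run("AAA"))    # C
```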
Aye, taking something on faith or on a low sample size is both uncomfortable and really impractical. Even if you have seen the net behave a certain way, are you confident about why, and about when it won't behave that way? I am like you: a logical system should, if possible, be understood through its logic, not through vague experience.
I don't think Python will remain the true king of these languages for long. I think a language with a somewhat similar style, but better suited to production (static types) and with better performance (e.g. no Global Interpreter Lock), will eventually take over if Python does not evolve enough in that direction (it already is, a bit). Complex languages are great practice, if nothing else.
--
It is, and it is also very easy to repeat the last value. Anyone can do it. That makes the net completely worthless, no matter how good repeating the last value is, since then you'd implement that as a rule and replace the net. That said, if a truly great forecasting model only repeats the last value that says something about your data, of course. Maybe it cannot be predicted better any other way.
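This is also why I always measure against that baseline explicitly; a quick sketch with made-up data:

```python
import numpy as np

# Always compare a forecaster against the "repeat last value" (persistence) baseline.
series = np.cumsum(np.random.randn(500))       # made-up, random-walk-like signal

actual = series[1:]                            # the values we want to predict
persistence = series[:-1]                      # forecast = just repeat the previous value
model_forecast = persistence + 0.01            # stand-in for a real model's predictions

mae_baseline = np.mean(np.abs(actual - persistence))
mae_model = np.mean(np.abs(actual - model_forecast))

# If the model's error is not clearly below the baseline's, the net adds nothing.
print(f"persistence MAE: {mae_baseline:.3f}, model MAE: {mae_model:.3f}")
```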
Mathematics is wonderful, and its usefulness is undeniable in programming. Algorithms are based on mathematics. Machine Learning, in particular, is heavy on Mathematics. Knowing Calculus is super useful for understanding Neural Nets! I don't see any reason to let your interest in the subject die off.
That Philosophy project does sound fun, but you know what I'm going to say: I fancy your "shocking" interest in Deep Learning, I wouldn't exactly mind if you pursued it..
It is late so I apologize in advance if my writing is worse than usual.
P.S: That feeling of "what the heck am I watching?" also comes through every now and then in Monogatari. It's quite the ride, I like it.
P.P.S: Good. I had a peaceful, relaxing day. I hope you have the same.
The wall of text grows ever taller, but I will overcome. Here I go!
I completely agree that it is important to keep the basics in focus. I try to learn what I can from seniors (even though none are experts in AI), and I can tell keeping it simple is a big strength of theirs. Don't let it make you lose ambition, though!
That sort of stuff is exactly what I deal with on a daily basis. If you encounter more, feel free to discuss with me. I strongly agree with him though, keep it simple first. It's very easy to get bogged down in over-engineering and grappling with advanced stuff, halting your progress on the basics.
Fun that you had the idea of replacing pooling with convolutions. I agree it appears to be primarily max pooling, but a convolutional layer is much more flexible than a pooling layer so I imagine you can make ones that behave similarly to any given pooling layer. Speaking of, I really like Colah's blog for understanding networks intuitively. Feel free to ignore the stuff I link if there's too much. I just like to provide good information.
Aw, unfortunate about Google Ed. Nvidia has some great hardware; I use Nvidia GPUs for most of my deep learning (a GTX 1080 in my laptop, various in the cloud), so hopefully that goes through. I had a lab at uni designed for AI work, and it was lovely. I hope for the same for you.
I think your points about theory ring true. I also think theory overlooks a lot of the nitty-gritty of real-life work, as it has to in order to keep things elegant and understandable. There are a lot of caveats in ML. After you get everything working, I suggest carefully testing your inputs, outputs and results. The crazy thing about ML is that you can have no errors pop up and still get completely wrong results, because you did something wrong while computing the output! Very scary. Keep that motivation going, though ^^
Hahah, I am partly playing with you. Python has many problems, but I at least respect it for the wonderful opportunities it provides me with. I think a good editor helps a lot with Python, things flow very naturally then. I use PyCharm myself. That said, I think PHP is a dangerous horror. Maybe horrors are fascinating?
Completely agree on the reasoning for choosing TensorFlow, go ahead with that! I like Keras because it lets me produce stuff faster, but the power and learning I achieve in TensorFlow is undeniable, so for an educational tool I think it is far superior. I combine the two in production.
No problem! There are other uses for the bias node, but I think that is the most important one to know.
Applying CNN to everything? Obsessive about it? You nerd, you.
--
I think it could be said that repeating the last value is a good starting point, and the best forecast is likely to be a modification of that value, so yes sort of. The issue is often that repeating the last value, and only doing that, is so good and so easy, the model often reaches the "local optimum" that is that strategy very fast. It is a struggle.
Same for me. This is all driven by interest. I don't have to do ML, my education lets me do most decent programming jobs at a starting level. I just wanted to explore this field, first.
Mathematics and programming! I heartily approve. Please, pour into them everything you can and I thank you for your service.
P.S: I think the Monogatari series is hard to spoil, since there is so much to take in, including how the emotions come through.
I know that feeling of getting bogged down in questions and thoughts. At least the more you get your knowledge straight, the easier it is to draw conclusions about your thoughts. That's why I think it's nice to get the basics down before you ponder too deeply. Tough question for your professor! Let me answer it for you, though, check this out: "We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks." One key here is the stride; without it you would increase the computational cost of the problem a lot. So, the answer to your question is: yes, you can replace them. Good catch!
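In Keras-style code the swap is literally one layer; a rough sketch of the two variants, with the layer sizes made up:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Two ways to halve the spatial resolution after a conv block:
# classic max pooling, or a strided convolution in its place.
def pooled_block(x):
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    return layers.MaxPooling2D(pool_size=2)(x)               # fixed, parameter-free

def strided_block(x):
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(32, 3, strides=2, padding="same",
                         activation="relu")(x)               # learned downsampling

inputs = keras.Input(shape=(64, 64, 3))
model_a = keras.Model(inputs, pooled_block(inputs))
model_b = keras.Model(inputs, strided_block(inputs))
print(model_a.output_shape, model_b.output_shape)            # both (None, 32, 32, 32)
```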
It was partly a light joke, "don't underestimate my field youngling!". It really depends though, as you say performance can be an issue and with difficult real data, accuracy can be troublesome to fiddle with too.
Ah, hardware constraints suck. Your teachers should really apply for something like this, if you ask me. My friends who didn't have a decent computational resource available used this option and barely managed.
If you get a good, optimized GPU architecture going, the speed on a decent GPU is really incredible.
Video! That adds a lot of interesting challenges. So your job is basically to flag suspicious events in the video? I can imagine you need a lot of training given all the variations in light, weather, and normal events. Hopefully the background is not too busy.. Sounds challenging, I'm glad you have help.
So much of deep learning and machine learning in general is doing research, I can only recommend it as well. Though I recommend getting a good test case up and then testing stuff. Nothing like realistic tests to see if an approach is a good fit, or not worth the time.
Hahaha, be nice to Python! The language has lots of great libraries for machine learning and AI; you should be grateful to your seniors and predecessors. I use mainly Python with TensorFlow as well, so if you have any trouble with the code I can probably help out. You can combine TensorFlow and Keras (tf.keras), and when it works it really is quite magical. It is so easy to set up and add extra features such as early stopping, performance calculations and logs, model checkpoints, etc. It is also very quick to test out any architecture combination you can think of, deep or shallow. I think it took me half a day to set up an autoencoder LSTM built into our entire pipeline, haha! Though that was just for fun, we don't use autoencoders much yet.
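For a feel of what I mean by those extras, a rough tf.keras sketch; the data, shapes, and file paths here are all placeholders, not our pipeline:

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 1000 sequences of 20 time steps with 3 features each.
x = np.random.randn(1000, 20, 3).astype("float32")
y = np.random.randn(1000, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(20, 3)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Each of the extras mentioned above is a one-liner callback.
callbacks = [
    tf.keras.callbacks.EarlyStopping(patience=5),                              # early stopping
    tf.keras.callbacks.ModelCheckpoint("best_model.h5", save_best_only=True),  # checkpoints
    tf.keras.callbacks.TensorBoard(log_dir="./logs"),                          # logs/metrics
]
model.fit(x, y, validation_split=0.2, epochs=50, callbacks=callbacks)
```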
Bias in what context? Usually in a neural network, a bias node is added to a layer to avoid silliness relating to the number 0. If you multiply stuff by 0, you get 0. You lose flexibility! Very inconvenient. Through the bias node, you add some learned number (the network chooses it via the bias weight) to your input, so that you do not end up in a situation where a 0 ruins all your flexibility. Usually, a layer with a bias node can produce any output for any input value.
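A tiny numeric illustration of that zero problem, with made-up weights:

```python
import numpy as np

# Without a bias, an all-zero input forces a zero pre-activation,
# no matter what the weights are.
x = np.zeros(3)                  # an all-zero input
w = np.array([0.7, -1.2, 0.4])   # weights (made up)
b = 0.5                          # bias weight (made up)

print(w @ x)                     # 0.0 -- stuck at zero, whatever w is
print(w @ x + b)                 # 0.5 -- the bias lets the neuron still respond
```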
There are other biases, such as for gates in LSTMs, and they have different functions that vary with the gate. Is a bias node what you were thinking of?
I say apply it to everything and see what works. Being inspired is usually a great thing, not harmful, though every now and then you go too deep... Get it? ^^
--
No worries, RNNs are weird and confusing, but I like to think of time in neural nets as just another dimension, like the width of an image. It helps. Forecasting is amazing, but its difficulty matches! It is very hard to get good results for many different cases, and there are so many ways to get misleading results. There are also so many ways the network can be misled. Often, the best forecast is just "repeat the last value", and so the network might do that, but no one needs a network to tell them that's a good estimate.
My journey into the field is nothing special. I try things. I first came to uni and tried a classic mechanical engineering degree. The first year we were doing technical drawings and the most uninteresting physics I've encountered. I moved on. Next was a CS degree, which was really fun. I really enjoy programming and the way you can create working programs with only a keyboard and a nice editor. Within the CS degree, I tested some genetic programming subjects and some old-school AI (which was taught in LISP right before I joined; now it's in Python..). I started seeing the amazing potential. I took some modern AI programming with a great professor (he's the LISP guy..) and got hooked there. The way you can solve problems that would have been really hard with conventional programming techniques is something special to experience, for sure. To summarize, I think programming is the main tool that will be used for most of the amazing stuff coming now and in the near future, and I think the flexibility of machine learning has great potential for adding a whole new level on top of that.
P.S: No need to limit your fawning over Kaiki for my sake, he is all those things. Though, I'd rather jump off a bridge than recommend a guy like that to a friend.. I haven't finished Monogatari either, some bits in the final season left to go. I'm excited to pick it up again.
First, congratulations on getting through your (I assume) first deep architecture in detail! That's quite the hurdle, especially with CNNs. Poolings, convolutions, and oh so many layers. Entering into something as complicated and confusing as that, but then coming out on the other side with a decent handle on things is really cool, don't you think?
Hey, don't underestimate anomaly detection. I do quite a bit of it, and it can get hairy once you try to improve how your setup performs. Suddenly, weeks have passed and you wonder where the time went.. What are the anomalies in the pictures like, what are you trying to detect? If you're doing it with an autoencoder, I'd guess the setup is something like:
1. Train the network on a bunch of (relevant) normal images. Here it learns how to condense the image, and then recreate the image from the condensed information in the intermediate layer.
2. Apply it to an anomalous example. Here the network should recreate most of the image, but should have trouble with the portion that is anomalous, since it differs significantly in structure from the normal images and is not well represented in the training data.
3. Look at the difference between what the network recreated and the input image. If and where there is a significant difference (you can do some clever stuff to define "difference"), that's an anomaly.
Is that something like what you are doing, or do you have a different approach?
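If it is the autoencoder route, a very rough tf.keras sketch of the three steps above; the sizes, data, and threshold are all made up for illustration:

```python
import numpy as np
import tensorflow as tf

# Rough sketch of the three-step setup with a tiny dense autoencoder.
normal_images = np.random.rand(1000, 28 * 28).astype("float32")   # stand-in "normal" data

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(28 * 28,)),
    tf.keras.layers.Dense(16, activation="relu"),       # condensed intermediate layer
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(28 * 28, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")

# 1) Train on normal images only (input == target).
autoencoder.fit(normal_images, normal_images, epochs=10, batch_size=32)

# 2) Reconstruct a new, possibly anomalous image.
test_image = np.random.rand(1, 28 * 28).astype("float32")
reconstruction = autoencoder.predict(test_image)

# 3) A large reconstruction error flags an anomaly; the per-pixel error map
#    hints at where it is.
error_map = np.abs(test_image - reconstruction)
is_anomaly = error_map.mean() > 0.1        # threshold picked on validation data in practice
```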
Ah, so now the gritty work begins. What sort of language and library are you using to set it all up? The cool part is that once you are done, you actually built a functioning thing that does pretty amazing computation.
As I mentioned in private, I do neural networks for time series. Naturally, that involves lots of RNNs (LSTM/GRU). The field is evolving though, as RNNs have some issues with speed and complexity. Maybe I'll be doing CNNs (or attention-based models) too in the near future, but on time series! Since I work for a company that takes data from factories, I also do some tricks on the data, handle how it comes into the neural network, how we output interesting results, and try to write it so it can handle huge amounts. Luckily we have some data processing and database people doing a lot of the extra work. I do anomaly detection(!) and forecasting (predicting the future).
I often get the job of making "the next new machine learning thing we need", which I find really exciting. Our anomaly detection setup is my design. That said, I'm still young, so I'm hoping to level up as time passes. If only deep learning seniors weren't so damn expensive I could learn from them; the last person we talked to wanted 300k USD D:
P.S: I hope I don't come off as preachy, I don't intend to be.
P.P.S: You forgot Kaiki, how could you.
Also, Kaiki and Ouzen are fantastic.
Hello, I saw you do CS+AI at uni in that recent forum thread. I wouldn't mind discussing or helping out with anything relating to that if you want. I quite enjoy the field.
I saw that you posted your stuff and deleted it. Why would you do that? It doesn't really matter, though, since from what I saw my method wouldn't correctly predict your MBTI.