Dim Red Glow

A blog about data mining, games, stocks and adventures.

stewing, brewing, and trying things

Wow, it's been basically a year since I posted any content (well, 8 months since the last post, and 4 months before that... so yeah). I've been thinking about resetting the blog just to start fresh, since it's been a good long time since I wrote regularly. As of right now I'm going to leave it as is; just don't be surprised if that happens.

 

As for new things... I did a remarkable thing in data mining. Basically, I found a good way to auto-adjust the learning rate in GBM. It's not that hard, but I'll keep the details to myself for right now. It saves a ton of time (as you can imagine), since you don't have to adjust the learning rate over and over again on any given setting, trying to hone it just right.
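To give a sense of what that saves (this is just the manual loop being automated away, not the trick itself, and the data and rates here are made up for illustration), a quick Python sketch of tuning a GBM learning rate by hand against a hold-out set:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# toy data standing in for a real training set
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# the tedious part an auto-adjuster removes: try rate after rate by hand
for rate in (0.3, 0.1, 0.05, 0.01):
    model = GradientBoostingRegressor(
        learning_rate=rate, n_estimators=200, random_state=0
    ).fit(X_tr, y_tr)
    print(rate, model.score(X_val, y_val))  # R^2 on the hold-out set
```

Every pass through that loop is a full training run, which is exactly why not having to babysit it is such a time saver.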

 

I haven't really played Magic in ages, though I do have one event coming up. I've only played once since November, so nothing really to write home about there. I did brew up a new deck to use, but I doubt much will come of it other than me having some fun.

 

I've tried stock analysis and investment a couple of times since I last wrote, all using data mining to do forecasting. Nothing fancy, just conventional stock investment (buying, then selling when the price is up). It seems to work well for a time, but then it always goes to crap after a month or so. It might be time to move to volatility trading. For now I've pulled out and am letting automated mechanisms run on collective2.com to see if they can produce good long-term gains. If they do, then I'll open up my results at securitiesminer.com (behind a feathercoin paywall) and make the portfolio available on collective2.com. Right now I'm still in the phase of both debugging the process and trying to get positive by honing the settings for buying and selling. This all assumes the predictions have merit.

 

I've been stewing on games to work on too. I have two ideas I want to do; one will be harder than the other. I'll post more about them when I have something more substantial to talk about (other than ideas in my head). I will say that I'm looking at babylon.js for one of them. I just can't seem to get into Unity, so I'm going to try that instead.

 

Since I mentioned games... I've played tons and tons of RimWorld. In fact, I now have a number of world records over at speedrun.com/rimworld :) I use the Twitch account 1daft. Nothing really to say beyond that; I just thought it was neat. The game doesn't draw much attention, so it's hard to be proud of any kind of real record. Still, I have fun.

 

Also in the news, I've spent A LOT of time looking at encryption, specifically the discrete log problem: solve for x in a^x = b (mod N), where a, b, and N are known. It's a great, fun problem to work on, but alas a good solution still eludes me. Which... I mean, is probably for the best. Solving it would break both RSA (HTTPS / primes) and elliptic-curve cryptography (used in cryptocurrency, among other things), and it would let people factor large numbers in trivial time. So that's something.
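If you want to poke at the problem yourself, the textbook starting point is baby-step giant-step. Here's a little Python sketch of it; it needs roughly sqrt(N) time and memory, which is exactly why it's useless at real key sizes (and why the problem is still fun):

```python
import math

def discrete_log_bsgs(a, b, n):
    """Baby-step giant-step: find x with a**x % n == b, assuming gcd(a, n) == 1.
    Roughly O(sqrt(n)) time and space, so only practical for small moduli."""
    m = math.isqrt(n) + 1
    # baby steps: a^j mod n for j = 0..m-1
    table = {}
    aj = 1
    for j in range(m):
        table.setdefault(aj, j)
        aj = aj * a % n
    # giant steps: repeatedly multiply b by a^(-m)
    factor = pow(a, -m, n)          # modular inverse of a^m (Python 3.8+)
    gamma = b % n
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * factor % n
    return None                     # no solution

# tiny sanity check: 3^x = 13 (mod 17)  ->  x = 4
assert pow(3, discrete_log_bsgs(3, 13, 17), 17) == 13
```

Scale that up to a 2048-bit modulus and the square-root cost is astronomically out of reach, which is the whole point.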

 

Finally, speaking of cryptocurrency... I've been waiting for feathercoin to pop again, but alas it hasn't. It has drifted down even as bitcoin has started to resurge. I've watched bitcoin go from 3500 to 12000, and in the same time feathercoin drifted down from 0.08 to 0.02... so yeah, disappointed. I think (as I've said before) people need a way to spend it to help encourage interest. I mentioned a feathercoin paywall on the securities site; I'll also consider doing it for the game (assuming it happens).

 

It's been another 4 months

Hello, hello, and hello! If you are still with me, then my hat's off to you. If this is your first time here, well, welcome!

 

So, what's new? Oh, I dunno. The yourdatamining.com website seems a bust; no one is interested in using it (not real surprising, to be honest). I advertised a little, and it was interesting to see what search terms led people there. Usually people didn't bother to use the site much and just left, and even if they did stick around, they never came back. So basically it's a no-go.

I'm not done though. I've got a new plan. I'm the new owner of http://www.securitiesminer.com/ and I'll be setting that up in a few days. In short, I will be forecasting various securities. I'll start out with stocks and probably add cryptocurrency right away; I might add regular currency exchanges as well. The program will forecast the 120-day-out valuation (price) change... actually, it'll predict percentage changes, but same difference really.
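Just to be concrete about the target, here's a tiny pandas sketch (toy prices and a short horizon, not the site's actual code):

```python
import pandas as pd

# toy daily closing prices standing in for a real price feed
prices = pd.Series([100.0, 101.5, 99.8, 103.2, 104.0, 102.7, 105.1])

horizon = 3  # the real thing would use something like 120 trading days
# the value the model is asked to predict: percent change `horizon` days out
target = prices.shift(-horizon) / prices - 1.0
print(target)
```

Predicting the percent change instead of the raw price just makes different securities comparable; you can always turn one into the other.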

I already do this for my own personal investment; I'm just going to expose the results to the world. I'd love for the site to make me enough money (through advertisements) that I don't have to have a day job. I won't go on about any more details. If you are a long-time reader, you likely already know enough about me and my interests to know this is right in my wheelhouse.

What else has been going on... I guess I'm finally done with Magic: The Gathering. I went to Eternal Weekend. It was "okay"; it felt like a small Grand Prix. I was disappointed to find out that the Old School championship wasn't on site and was run by another organization (Eternal Central). I was also disappointed by their rule not allowing reprints. It kind of spoils the point, imho; the game is the game, not one or two particular versions of the cards. So I skipped that event, as my deck had reprints of various cards. I could have shown up with a substitute deck, but the whole thing left a bad taste in my mouth. Anyway, next year I might still do Vegas, but I dunno. Interests change.

I'll be doing GO! St. Louis in April (the marathon), so I've started training (barely). Between that, RimWorld (a game I really like... you can see me play on Twitch, user 1daft), the securities site... and a day job, I keep busy :)

 

making data mining better

I've been working on a new version (seems like it's been a while) of my data miner. The last one I worked on / got working used the logistic function to build curves that represented the left and right side of each splitter in the trees I built. This worked really well. I'm not sure whether it was state of the art or not, but when combined with either random forests or gradient boosting, the results were better than my previous techniques.
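I'm keeping the new technique to myself, but the old one is fair game to illustrate. Roughly, and this is my loose sketch of the idea rather than the real miner, a split stops being a hard yes/no and becomes a sigmoid blend of the two sides:

```python
import numpy as np

def soft_split(x, threshold, scale, left_value, right_value):
    """One way to read "a logistic curve for each side of a splitter":
    instead of a hard left/right decision at the threshold, blend the two
    leaf values with a sigmoid. Purely illustrative, not the actual miner."""
    w_right = 1.0 / (1.0 + np.exp(-(x - threshold) / scale))
    return (1.0 - w_right) * left_value + w_right * right_value

x = np.linspace(-3.0, 3.0, 7)
print(soft_split(x, threshold=0.0, scale=0.5, left_value=-1.0, right_value=1.0))
```

Points far from the threshold behave like an ordinary split; points near it get a smooth mix, which is where the gain over a hard splitter tends to come from.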

So what does the new one do? Well, that would be telling :) Maybe a better question is: how much better is it? Not much, but it does seem to be better. Some run-of-the-mill tests on the last Kaggle contest I worked on (Home Credit Default Risk) show it producing around .74 AUC compared to .73 with the old technique. "That's terrible!" No, it's not :) I'm only using one table they gave us (the main one), and I'm doing nothing special to it; I only created one feature. My results aren't going to be all that impressive with so little work. To me that seems pretty decent considering what it's working with. You can read the winner's solution (its AUC was .8057 on the leaderboard) here: https://www.kaggle.com/c/home-credit-default-risk/discussion/64821

With both Home Credit Default Risk and Santander Value Prediction, I didn't put in the effort I should have to get the training data set up, especially with Santander. That one was starting to get interesting; then they discovered the leak, and I tried naively to implement it with basically zero success and said, "Meh. Even if I get this working right, I'm pretty far behind the curve. This has stopped being a contest I really want to do."

Another thing I've come to realize: the genetics can do the same thing my normal data mining does, but a better way to use them (at least for the run time) seems to be to just make new features. I'm still working out mechanisms (in my head) for deciding when a particular genetic output is ideal. Correlation seems ideal, but so does something that correlates highly with the solution and not with any of the existing features. That is, maximize one and minimize the other; I'm just not positive on the best fitness test to do that. Once you've done it once, you do it again and again. Since each feature is independent, they ~should~ add.
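To make the "maximize one, minimize the other" idea concrete, here's one possible fitness test sketched in Python. I'm not claiming this exact formula is the right one (the names and the penalty weight are made up); it's just the shape of the thing:

```python
import numpy as np

def feature_fitness(candidate, target, existing, penalty=1.0):
    """One possible fitness test (a sketch, not a settled choice): reward
    correlation with the target, penalize the strongest correlation with
    any feature already in the set."""
    corr_target = abs(np.corrcoef(candidate, target)[0, 1])
    corr_existing = max(
        (abs(np.corrcoef(candidate, f)[0, 1]) for f in existing), default=0.0
    )
    return corr_target - penalty * corr_existing

rng = np.random.default_rng(0)
y = rng.normal(size=200)
old_feature = y + rng.normal(scale=0.5, size=200)   # a feature already in the set
candidate = y + rng.normal(scale=1.0, size=200)     # a candidate from the genetics
print(feature_fitness(candidate, y, [old_feature]))
```

A candidate that just rediscovers an existing feature scores poorly even if it correlates with the target, which is the behavior I'm after.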

Incidentally, I'll be using the new aforementioned technique on my new for-the-public data mining website ( https://yourdatamine.com ). I've spent a lot of time cleaning up my code and moving it into a wrapper so it's easily portable to the website. Cleaning it up was good too; I got rid of LOTS of old code that either did nothing, wasn't used, or was being used but shouldn't have been. I also sped up the load and made it so you can do new cross-validations willy-nilly. Previously I set that up in the database; now I can just pick a number of CV folds and the code will make new ones on the fly (or it can still use the DB). Having a consistent cross-validation does give a reproducible result, but sometimes the random selection happens to be weird, so mixing it up can be good.
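Making folds on the fly is about as simple as it sounds. A little sketch (not my actual code, and the balancing trick here is just one way to do it):

```python
import numpy as np

def make_folds(n_rows, n_folds, seed=None):
    """Assign every row a fold number on the fly. A fresh (or absent) seed
    gives a brand new split; a fixed seed reproduces an old one, which is
    roughly what storing the split in the database used to buy me."""
    rng = np.random.default_rng(seed)
    folds = np.arange(n_rows) % n_folds   # keeps the fold sizes balanced
    rng.shuffle(folds)
    return folds

print(make_folds(10, 3, seed=42))
```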

The website is going to get a new page that lets you know what kind of results you can expect from a training set, by running analysis on it with the on-the-fly cross-validation and giving you expected error margins.
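Something like this, with stand-in pieces (Ridge and R^2 here are placeholders for whatever model and metric the page actually ends up reporting):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# stand-in for an uploaded training set
X, y = make_regression(n_samples=300, n_features=10, noise=10.0, random_state=0)

# score each on-the-fly fold, then report a central estimate plus a margin
scores = cross_val_score(Ridge(), X, y, cv=5, scoring="r2")
print(f"expected R^2: {scores.mean():.3f} +/- {2 * scores.std():.3f}")
```

The spread across folds is the error margin; noisy or tiny uploads will show a wide one, which is useful to see before you trust the predictions.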

I made a little effort to get all of that to run in memory. The processing was always done that way, but the data source used to come from a database (as I was mentioning). When I put the code into a package, I also wrote a class that directly imports the data into the structures it needs. This has two upshots: first, people can rest assured I don't keep or even care about their data (I set up HTTPS for that very reason too); and second, the website has no database to maintain. Its hard drive footprint never increases (though memory varies by current usage). If the site becomes popular, I'll make it so the requests go to new instances/farms/application servers, which can be spun up easy peasy! So it's a super scalable solution (or should be).

 

new website on data mining and a neat video on heat death

Hello folks! Things keep moving forward (time is like that). I've been busy today. I've started to put together a website for data mining: https://www.yourdatamine.com. This is a site that allows anyone with a few CSV files or tab-delimited files to do some data mining. I might monetize it later (make it pay-per-use), but until I gauge the interest I'm going to make it free. I need to finish it too, of course, but I think I can have a basic version ready by end of day tomorrow. I only started on it today, so that's something.

I also have continued to update projects on http://project106.com. I need to update it tomorrow as well (just to bring the weight loss section up to date and add a project for yourdatamine.com). But really, things are going well there, even if it's updated sporadically.

Finally, I wanted to throw out a link to a video done by PBS Space Time on the blog's namesake... the heat death of the universe. What? You didn't realize that's what "dim red glow" referred to? I... I don't know you anymore. *grin* Seriously, the name is a reference to the idea that at the very end of the universe all that will really be left are photons zipping around, slowly getting stretched out more and more as they grow fainter and fainter due to dark energy spreading them over an ever-increasing area, eventually leaving nothing but a dim red glow. Anyway, here you go, it's a good one :) Enjoy.

Vacations and Project 106

Real quick update... not a whole lot going on right now. I went to Aruba and Vegas since I last posted.

The trip to Aruba was the worst vacation I've ever been on. Why? Well, while nothing catastrophic happened, lots of really crappy stuff did. Here's the list. I hurt my back a week before going, making sitting for long stretches hard and a lot of things impossible. My connecting flight was canceled. I got stuck overnight in Chicago (despite it being only around noon when the flight was canceled), which lost me a day of vacation (thanks, American Airlines :( ). My next morning's flight was at 5:30am; I usually get up around 11am and was already on 3-4 hours of sleep, so I was exhausted in a way I shouldn't have been for a leisure vacation. They lost my bags. Lesson learned: pick up your bag if you have a canceled flight; don't leave it overnight.

My accommodations weren't a resort. That's my fault, but it was a shock when I got there. I thought it was a resort, but it was a condo :( This means no comforts or even a gift shop; everything I needed was about a mile's walk away. Like, you know, the clothes I didn't have because of the lost bag (thanks, American Airlines). There was no bar (wet or otherwise), no place to get clothes, no gift shops, no restaurants, no casino, the TV was basic cable, no... well, anything except a pool and a nearby small public beach. Even people didn't exist; the place was so empty. The pools were nice, though. There was one nearby restaurant that did a special 8-course meal thing I went to, but otherwise it was a pretty giant waste / not a vacation.

The island is a desert, so there's not much to look at (cactus and everything), and it was so windy! I skipped/missed two of the three excursions I planned because of my back and the canceled flight. My ear got clogged with wax Saturday night right before the 8-course meal (so I had this delightful ringing, muffled sound in my right ear for the rest of the trip). Turns out there are no pharmacies open in Aruba on Sunday. Not that they really could have helped; I didn't get it cleaned out until Wednesday at the doctor's office. I had allergies Saturday night... and a full-on sickness Sunday night (I flew back Monday). So yeah, all in all it was terrible / felt like work.

Vegas was the exact opposite. I stayed at the Bellagio, and while it was really nice, there were a few bad things that surprised me. The sink liked to suck air when someone else in the hotel flushed a toilet, which meant it would go "glug" in the middle of the night. The hall smelled of mold and clearly had a broken pipe right before I got there... so yeah, odd for a fancy hotel.

The pools were amazing, as was the food. I saw one show (the Michael Jackson one). I think I'm just done with musical shows because it was kind of boring / not my thing. Also, I played in the Vegas GP, which went terribly, but they refunded my money. They did this to compensate us for waiting around; they had technical issues, so we sat for like 2.5 hours doing nothing. (They also gave me a free entry to another GP later that year. I was done with GPs, but... free entry, so yeah, maybe I'll do one more.)

I got pretty badly sunburned at the pool. Apparently the shirt I wore down took off a lot of my suntan lotion :( Let that be a lesson to you all: reapply at the pool. The pool itself, though, was amazing! Drinks, good times, books, swimming, lots of people. Yeah, no complaints there.

Finally (unrelated), I've put together another website to track the various things I've been working on. Check out project106.com when you get a chance. The site itself is just a living document about the rest of the projects, many of which I'll mention here eventually.

 

wow talk about leaving people hanging

Hah, sorry about that. 17 hours to go and I disappear! I did not win. *grin* The contest really went all topsy-turvy when they released the results. Everyone's score got worse and people got rearranged. More than a few people (myself included) said their best result was not selected. I fell to sub-100. The winner was someone who moved up a lot, and I heard from at least one person in the forums that they had a better score than the winner, but it wasn't selected.

Why did this happen? Well, probably due to the size of the data; it was a really small sample. It was also possibly due to some deliberate sampling the contest runners did (where certain things weren't expressed in the training data), but really, sometimes it's just bad luck too. That still leaves me in a great position. I've not rested in the time between then and now. I've been applying the stuff I've learned, honed the code, and improved the "fitness" test.

Ah yes, so I experimented with all kinds of ways to score results... ways to test the results, fit the results, etc. Right now the thing I think is best is to bag the data: train on ~63% of the data (and do the linear algebra for fitting on that), then score on the remaining ~37%. I use that score to evaluate the accuracy of the stack and to weight the stack as it gets added to the final answer. I also score (fitness test) it by the same method I mentioned previously (self-similar scoring), as well as a final modification I made to the correlation coefficient's calculation internally. This last one is the big one, and since it does no harm to share, I'll do that.

The normal Pearson correlation coefficient (https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) basically takes the sum of all (x - avg(x))*(y - avg(y)), where x is the prediction and y is the answer. It then divides by the number of elements and divides again by the standard deviation of x and the standard deviation of y multiplied together. My change is to leave that alone, then also do a second calculation on the square of (x - avg(x))*(y - avg(y)): divide that by the number of elements, and then divide by the variance of x and the variance of y multiplied together (aka the standard deviations squared). I take that result and multiply it by the original value. This new value is my "true" coefficient, or at least close to the truth.

I actually left out a step: I don't allow any value to be over 1, so if it does go over 1, I invert it. Don't ask me why, but this seems to be best... that is, a 0.5 coefficient is roughly equal to 2 overall (in terms of value for analysis), 0.33 = 3, etc. So when I multiply the two numbers together, they are already in a <= 1 state. I also tend to take the absolute value of the original coefficient, since negative or positive will get adjusted by the linear algebra (the squared version is always positive).
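Since I'm sharing the idea anyway, here it is written out as a small Python sketch. This is my reading of the steps above; treat it as an illustration rather than the production code:

```python
import numpy as np

def true_coefficient(x, y):
    """Plain Pearson (abs'd) times the same calculation done on the squared
    products, with anything over 1 inverted before the multiply."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    dx, dy = x - x.mean(), y - y.mean()

    def clamp(v):
        # don't allow any value over 1; if it goes over, invert it
        return 1.0 / v if v > 1.0 else v

    r = abs((dx * dy).sum() / n / (x.std() * y.std()))       # ordinary Pearson
    r2 = ((dx * dy) ** 2).sum() / n / (x.var() * y.var())    # squared-product version
    return clamp(r) * clamp(r2)

rng = np.random.default_rng(0)
actual = rng.normal(size=500)
prediction = 0.7 * actual + rng.normal(scale=0.7, size=500)
print(true_coefficient(prediction, actual), np.corrcoef(prediction, actual)[0, 1])
```

The squared-product term is what punishes a prediction whose apparent correlation comes from a few big spikes rather than the bulk of the rows.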

So that's the skinny. That works best, at least so far. When I don't do it, I get random overfittings to the training data... a spike of a value here or there that correlates and makes the result look better than it actually is. The squaring deals with that. Using cubes and higher powers might help, but after trying cubes in there, I saw no real value-add.

What am I using this stuff for? Stock analysis :)

 

17 hours left...

There are 17 hours left in the transparent-materials contest. Despite lots of changes, I haven't managed to move my score in a few weeks. I have, though, made my code more efficient (added a few mechanisms that help it get to the best score faster). And I added a mechanism I think will help a lot when there is a larger data set: I can flag training data features to get passed forward to the linear algebra that fits the correlated value to the actual value. With this tiny dataset, though, all it does is seriously overfit, so I turned it off. I also tweaked the scoring mechanism to work very nearly like the final adjustments (only two differences: the final run splits the data in half and uses two different linear algebra computations, and it also dampens the results, merging them with the average for each half). Basically, I use the linear algebra in there too.

So where does that leave me? I'm doing one final run right now. It'll either be done or be close to done by 6 tomorrow (I can submit it at any point along the stage of evolving to an answer). So maybe it'll give me a better score, but probably not. We'll see! Regardless, it looks like I'll have a top-10% finish, which is nice; it's been, gosh, four-ish years since I had one.


i guess things are looking up

Work still continues on the client/server code. I had some real issues automating the firewall access, but I think I've got that sorted now. I set it aside after that (that was Sunday); I'll probably look at it some more tonight.

I've been trying some slow gradients in the boosting part of the algorithm. I think the idea here is less about picking up other signals in the data and more about dampening the noise I introduce. Instead of putting in 70% of the answer, I'm doing 40% now, and it's working pretty well. I get the feeling 20% might work better still. The idea is that if you are introducing noise, you want your noise to be at the same "level" as the background noise, so on the whole you aren't adding any (since noise ideally hovers around 0 and the various noises cancel). The signal, while only at 20%, is still signal and does what it's supposed to do (identify the answers). The 40% run is still going; I'll probably start on the 20% run before I get the client/server version done.
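For anyone unfamiliar with the jargon, the "percent of the answer" knob is just the shrinkage (learning rate) on the boosting step. A bare-bones sketch, nothing like my actual genetic version (trees, depth, and round counts here are arbitrary):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost(X, y, n_rounds=100, shrinkage=0.4):
    """Bare-bones squared-error boosting with a shrinkage factor. Each round
    only `shrinkage` of the fitted residual is added back, which is the
    "put in 40% (or 20%) of the answer" idea in its plainest form."""
    pred = np.full(len(y), y.mean())
    for _ in range(n_rounds):
        tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y - pred)
        pred = pred + shrinkage * tree.predict(X)
    return pred

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=300)
pred = boost(X, y, shrinkage=0.4)
print(np.mean((pred - y) ** 2))  # training fit only; the payoff from smaller steps shows on held-out data
```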

I started looking at the data it's using (just for fun). Turns out it's using a lot of atom position data... which is weird. I read that there is a standard "best" form for describing crystalline structures, and these have already been optimized for that. So maybe conceptually certain atom positions (atom 20's X position, for example) are telling of the details of the crystal, but this seems wrong... or at least a really terrible way to get at the data you want. I might do my 20% run with the XYZ data turned off just to see if it: 1, produces better results and 2, has results that make more sense.


bias bias bias...

Work continues on the client/server version of the genetic algorithm, but the pressing issue remains: bias. The results I get locally at the end of a run are usually far better than the number 1 spot on the leaderboard... but my submissions are never that good. I think I found two things that might be causing it.

The first was some code I had in place from when I was trying to use RMSE instead of the correlation coefficient. I was calculating an average value to be used for "filler". The genetic algorithm has two major things going on: one is the prediction, and the other is deciding when the prediction is used... when it's not used, it falls back to the average value (actually it's not a hard, fast switch but a weighting based on a second prediction going on). The point is, the value being used wasn't the average for that stack (it's a series of stacks, each adjusting for the error of the last) but the average across all stacks. The difference is that the average of the current stack is usually 0 (the first stack is the exception), while the average for all stacks is the training set's average value. I don't know how much that matters, because the linear algebra at the end adjusts the correlation values to actual values... but it certainly didn't help.

The second was the idea that even with the self-similar approach I'm still fitting to my training data... and all of it... so bias is unavoidable. I might have a way to fix that. Basically, I'm going to treat the entire stack as one fitted answer on 63.2% of the data (that's right, this is bagging), using the same exact training data portion over and over again as I fit my genetics to it (it's still self-similar too). At the end, when I'm done improving, I can take the remaining rows and use those to figure out how I should adjust my correlated predictions. In short, the hold-out data becomes the unbiased mechanism I use to scale the correlated values. I could also use it to get an accuracy number if I wanted. I might, but right now I'm just using it to do the scaling in an unbiased way.
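In code form, the hold-out scaling idea looks something like this (a sketch with a simple linear fit standing in for whatever adjustment ends up working; the data and noise levels are made up):

```python
import numpy as np

def fit_oob_scaling(raw, actual, in_bag):
    """Sketch of the idea: the stack only ever trains on the in-bag rows, so
    fit a slope/intercept from raw correlated output to actual values on the
    out-of-bag rows, then apply that scaling everywhere."""
    oob = ~in_bag
    slope, intercept = np.polyfit(raw[oob], actual[oob], deg=1)
    return slope, intercept

rng = np.random.default_rng(1)
n = 500
in_bag = np.zeros(n, dtype=bool)
in_bag[rng.choice(n, size=n, replace=True)] = True   # bootstrap: ~63.2% of rows end up in-bag
actual = rng.normal(size=n)
raw = 0.5 * actual + rng.normal(scale=0.3, size=n)   # stand-in for a stack's correlated output
print(fit_oob_scaling(raw, actual, in_bag))
```

Because the out-of-bag rows never touched the fitting, the scaling they produce shouldn't inherit the optimism that was sinking my submissions.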

So those two should help! There is a third thing I thought of too, but it's unrelated to bias; I'll try it later when the client/server code is working. Instead of using least squares to fit the correlated values, I think fitting by the evaluation metric might work better: sqrt(sum((log(prediction + 1) - log(actual + 1))^2) / n), see here https://www.kaggle.com/c/nomad2018-predict-transparent-conductors#evaluation . The thing is, to do that I have to implement it myself; I couldn't find any accord.net code to do it.
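The metric itself is tiny to write down. Here it is as a quick Python sketch (the version I actually need has to plug into the fitting code alongside accord.net, hence the extra work):

```python
import math

def rmsle(predictions, actuals):
    """Root mean squared logarithmic error, the contest's evaluation metric:
    sqrt(mean((log(p + 1) - log(a + 1))^2))."""
    n = len(predictions)
    total = sum((math.log(p + 1.0) - math.log(a + 1.0)) ** 2
                for p, a in zip(predictions, actuals))
    return math.sqrt(total / n)

print(rmsle([2.5, 0.0, 1.8], [3.0, 0.1, 1.8]))
```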

i just can't stop working on genetic algorithm stuff

Hello, hello, hello. I don't even know where to begin. I guess some short stories: the improvements to the genetic algorithm have been steady and successful. I do still struggle with bias some, and I also struggle with speed. Now a little more on each.

For bias, I'm pretty certain my strategy of scale-able scoring (or whatever I'm calling it) is the way to go. That is, regardless of the size of the sample, the sample scores the same. The more samples of varied size, the more accurate the score is (with more certainty). Basically, you use the worst score from the group of samples. You should always include a full 100% sample as well as the various sub-samples, but I run that one last and only if the sub-samples indicate the result still qualifies. In fact, I quit any time the score drops below a cutoff, which actually makes the whole thing faster too.
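Here's the shape of that in a quick sketch (the sample sizes, cutoff, and scoring function are made-up placeholders, not my actual settings):

```python
import numpy as np

def scalable_score(score_fn, predictions, actuals,
                   sample_fracs=(0.25, 0.5, 1.0), cutoff=0.0, seed=0):
    """Sketch of the 'worst score over samples of varied size' idea: score
    sub-samples smallest first, keep the worst value, and bail out before
    the expensive full-sample score if we've already fallen below the cutoff."""
    rng = np.random.default_rng(seed)
    n = len(predictions)
    worst = float("inf")
    for frac in sorted(sample_fracs):
        idx = rng.choice(n, size=max(2, int(frac * n)), replace=False)
        worst = min(worst, score_fn(predictions[idx], actuals[idx]))
        if worst < cutoff:
            break   # not qualifying; skip the remaining (bigger) samples
    return worst

def pearson(p, a):
    return np.corrcoef(p, a)[0, 1]

rng = np.random.default_rng(2)
actual = rng.normal(size=400)
pred = 0.6 * actual + rng.normal(scale=0.5, size=400)
print(scalable_score(pearson, pred, actual, cutoff=0.3))
```

Taking the worst score across sample sizes is what keeps a lucky sub-sample from flattering a candidate, and the early bail-out is where the speed-up comes from.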

For speed, I've managed to get some sample GPU code to work, which is great! But alas, I haven't found time to write the client/server code and implement a distributed version of the genetic stuff. I will; I just need more weeks/weekends. This will hopefully give me something like a 50-1000x boost in processing power.

All this work has been on https://www.kaggle.com/c/nomad2018-predict-transparent-conductors, which is rather ideal for my purposes here. You can read the in-depth goings-on here: https://www.kaggle.com/c/nomad2018-predict-transparent-conductors/discussion/46239 . I'm really hoping I end up in the top 10 before it's all over.

There also happens to be a new contest, https://www.kaggle.com/c/data-science-bowl-2018, which requires image analysis. However! I think this might also be a contender for the genetic algorithm, though maybe a different version. I could certainly load the images into the database and let the genetic algorithm figure out what is what... but I think there might be a better, more fun way: design a creature (yes, creature) that can move over the board and adjust its shape/size, and when it thinks it has found a cell, it sends the mask back for a yes/no. After, I dunno, 1000 tries it is scored, and we make a new creature that does the same thing... breed winners, etc., etc. Or we could go the reinforcement route: whenever it sends back a mistake we tell it "bad", and when it sends back a success we send back "good". That way there would be only one organism, and the learning would happen at a logic level inside of it instead of through new versions of itself over and over again. I haven't decided which I'll do, but I think it's probably something I'd get a kick out of writing.