Very grateful for this series of notebooks. I ran into some trouble, though, and have a couple of questions. I'm a complete beginner in this field, so please don't be surprised by them. 1) Is it OK that we now have only about 140k images in our training set? (I thought it should be at least around 400k.) 2) The main question: what to do with postprocessing? I understand that we should somehow use the pickle file bins, but I've never worked with such files. Can anybody please tell me what to do, or share a link that clarifies it?
Yes, we removed a lot of files without positive labels in this prototyping set. I'll be showing that shortly.
Kaggle is currently experiencing a bug. It looks like you can't add GPU to any kernel that wasn't previously committed with GPU before 22:00 UTC, October 21. To see if you have a GPU installed, run the following command: `!nvidia-smi`
Good to know, thanks!
Kaggle is currently experiencing a bug. It looks like you can't add GPU to any kernel that wasn't previously committed with GPU before 22:00 UTC, October 21. To see if you have a GPU installed, run the following command: `!nvidia-smi`
My current workaround is to open an old kernel that was committed with GPU and delete all the old code, then copy-paste the new code. Then I get a GPU.
Yep, I got the same error today. My "fix" was: Create a new kernel with GPU enabled, re-add datasets and copy all code. I think it's related to the GPU toggle option.
Thanks for the workaround! =]
That's not good. 😬 The kernels team is looking into this now.
And vice versa. If a kernel was last committed with GPU, I cannot switch it to become a CPU kernel.
That's not good. 😬 The kernels team is looking into this now.
Thanks. Something weird is going on. Only kernels that previously had GPU on their last commit can get a GPU. Any new kernel or kernel that did not have a GPU on their last commit cannot get a GPU.
Looks like an issue from kaggle's side. I am getting the same error.
Ah, OK. Thank you.
CV 0.01375, LB 0.01408, but this is very preliminary. I'm using cross-validation. As always.
Have you ever worked with high-resolution pathological .scn images? I need some advice.
Beautiful kernel, thanks! I am having trouble to access feature names from full_pipeline using get_feature_names(). Do you have any suggestion?
You're welcome! I think you cannot access the CountVectorizer class from the full_pipeline, because the whole idea of the pipeline is to run all preprocessing steps at once. You probably want to build a pipeline like this when you're done with the EDA part and want to test different models and hypotheses. You can build a shorter pipeline that ends with the CountVectorizer and extract feature names from there (see the sketch below). You can also add model training to the pipeline as well. Another way to have more control over separate steps, while still having the full power of a pipeline, is to use a version control system specifically customized for data analysis, like DVC.
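As an illustration of that suggestion, here is a minimal sketch (the pipeline layout and the toy corpus are assumptions): build a shorter pipeline that ends with the CountVectorizer, fit it, and read the feature names from that step.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline

# A short pipeline that stops at the CountVectorizer step.
text_pipeline = Pipeline([
    ("vectorizer", CountVectorizer()),
])

corpus = ["the cat sat on the mat", "the dog barked"]  # toy stand-in data
text_pipeline.fit(corpus)

# Reach the fitted CountVectorizer via named_steps and read its vocabulary.
vectorizer = text_pipeline.named_steps["vectorizer"]
print(vectorizer.get_feature_names())  # newer sklearn: get_feature_names_out()
```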
Hi guys, I have some background in statistics, EDA, supervised ML techniques, and the energy sector. I have been participating in competitions on other websites too and have been using Python for the last 2.5 years. If anyone is interested in working together, please reach out.
Hi Pratik, I'm also interested in teaming up! Your experience and domain knowledge seem wonderful. I don't have a strong ML background, but I placed 8th by finding a magic feature in the latest competition, so my insight may help you.
I use EC2 all the time, although I use RStudio with it and have no experience using Python notebooks on it. I do use spot pricing, and it is wayyyyyy cheaper than on-demand. I accidentally used on-demand once and racked up quite a bill... The only problem with spot instances is that if the price goes over your bid price, your instance will be shut down, which could be a problem depending on what you're doing. I usually set my bid price somewhat high, so I've only had an instance terminated on me once. A lot of what I do on EC2 involves creating many models in a loop, so to protect against any terminations I save files within the loop and regularly create snapshots of my instance every few hours. This way I can create a new instance from the snapshot and start again from where I left off.
Doesn't creating a snapshot imply shutting it down?
Thanks, nice study! I don't understand this statement: "A solution to avoid scale errors is to normalize the values from 0 to 1." Can you elaborate a bit more? Thanks.
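For context while waiting for the author's answer: "normalize the values from 0 to 1" usually refers to min-max scaling, which removes scale differences between features. A minimal sketch (the sample values are placeholders):

```python
import numpy as np

values = np.array([3.0, 10.0, 25.0, 50.0])
# Min-max scaling: maps the smallest value to 0 and the largest to 1.
normalized = (values - values.min()) / (values.max() - values.min())
print(normalized)  # [0.    0.149 0.468 1.   ] (rounded)
```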
Have you ever worked with high-resolution .scn files? I need some advice.
Hi, can you explain further what might be happening, step by step? Are you able to provide screenshots of what you are seeing?
There was another comment on my comment in the discussion and I got an email. A total of 400 emails were received in 7 minutes.
I think sites (0, 8) and (7, 11) are the same
Wow, nice intuition! I wasn't aware of that. Maybe they are very close to each other.
The t-SNE method is particularly effective for clustering data from the output of an autoencoder. I attached to each point the image it corresponds to; it looked very spectacular in https://projector.tensorflow.org/!
Thanks man
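A minimal sketch of the workflow described above, assuming you already have the autoencoder's bottleneck outputs as a NumPy array (the shapes here are placeholders):

```python
import numpy as np
from sklearn.manifold import TSNE

embeddings = np.random.rand(500, 64)  # stand-in for autoencoder bottleneck outputs
# Project the embeddings to 2-D for visual clustering.
points_2d = TSNE(n_components=2, perplexity=30).fit_transform(embeddings)
print(points_2d.shape)  # (500, 2)
```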
Hi guys, Have some background in statistics, EDA, supervised ML techniques and energy sector. Have been participating in competition on other websites too and using Python since last 2.5 years. If anyone is interested in working together, please reach out.
I can't find you in the list of users for teaming up. Have you registered for the competition?
What does "model" mean? The model weights?
Yes, such as a .pth file or similar.
Nice 👍
thanks
Just like I wrote in another thread, my 5/5-fold solution scored lower than my 3/5-fold solution on the public LB, so I expect a shakeup. I could finish with gold, I could finish with bronze. Or without a medal?
We're still not 100% sure how many of us are actually overfitting the public LB, so relax and trust your model :)
Thanks for sharing! Have you ever tried another autoencoder method (e.g., a variational autoencoder)?
I tried a few options but this one worked best for this problem. I don't remember exact results for the other approaches though.
Wow, impressive kernel presentation! It'll take some time to analyze it. Thanks for the lesson. Upvoted.
I am glad it helps :)
After you introduce an explicit alias in a query, there are restrictions on where else in the query you can reference that alias. These restrictions on alias visibility are the result of BigQuery's name scoping rules. ... Aliases in the SELECT list are visible only to the following clauses:
- GROUP BY clause
- ORDER BY clause
- HAVING clause

Read more here. Since you reference the y alias in the WHERE clause, BigQuery throws an error. Replace y with EXTRACT(YEAR FROM trip_start_timestamp), or use a CTE if you want to keep it :) (a CTE sketch follows below).
No, I think the WHERE clause is executed before GROUP BY.
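To illustrate the CTE workaround mentioned above, here is a minimal sketch (the table and column names are assumptions based on the public Chicago Taxi Trips dataset): compute the alias in a WITH clause, then filter on it in the outer query.

```python
# The alias y is defined inside the CTE, so the outer WHERE clause can see it.
query = """
WITH trips AS (
  SELECT EXTRACT(YEAR FROM trip_start_timestamp) AS y, fare
  FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
)
SELECT y, AVG(fare) AS avg_fare
FROM trips
WHERE y = 2017
GROUP BY y
"""
# from google.cloud import bigquery
# df = bigquery.Client().query(query).to_dataframe()
```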
Nice kernel - although ARIMA is of no use on industrial datasets
I can only say that ARIMA may be of no use on "your" industrial dataset.
Hi, thanks for your feedback! You can request new Python packages by submitting a pull request here: https://github.com/Kaggle/docker-python
Thank you for your answer, here it is: https://github.com/Kaggle/docker-python/pull/629
Hi guys, Have some background in statistics, EDA, supervised ML techniques and energy sector. Have been participating in competition on other websites too and using Python since last 2.5 years. If anyone is interested in working together, please reach out.
Interested in teaming up. Let me know!
Hi, I am a post-graduate student majoring in data science and a beginner on Kaggle. I want to use my knowledge to solve practical problems. I have some experience with Python/pandas and am familiar with some basic models. I am from Mainland China and now study in Hong Kong. If anyone is interested in working together and doesn't mind communicating remotely, my email is nanbei629@gmail.com. Thanks!
I have sent the request to you.
Kaggle is not asleep, I just need to wake up!!!
Your picture betrayed you! I am originally from Lorient.
Nice kernel, keep it up! I upvoted it. Please see my work on car price prediction (MLR) and please do upvote. Any suggestion is highly appreciated: Car Price Prediction
Thanks, will definitely check it out. 👍
nice !!!
thank you
Hi Jeremy. I'm studying this in depth right now, and I was wondering what the rationale is for keeping ALL the rows with a label, and then only HALF of the resulting number of rows without a label. Doesn't that result in oversampling the positive class when, in reality, this class is less represented than "not any"? Is there a reason why we would want our model to believe that most of the time there is hemorrhage, when most of the time there is not? Also, I'm trying to read the output files. I'm using: def png16read(self:Path): return array(Image.open(self), dtype=np.uint16) as found here. But then, I can't convert uint16 to a PyTorch tensor. Any type of signed int is supported, but only uint8 is on the unsigned side...
I understand the rationale behind the prototyping dataset; it's the same strategy you use in the fastai ML course with bulldozers. That makes sense. I was just worried that showing too much of the positive class would be troublesome, but then I suppose I could iterate a few times and create different undersampled datasets or something similar... Thank you for your time!
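For the uint16 issue raised above, one workaround (a minimal sketch; the file name is a placeholder) is to cast the array to a signed integer type PyTorch does support before building the tensor. The uint16 to int32 conversion is lossless.

```python
import numpy as np
import torch
from PIL import Image

arr = np.array(Image.open("scan.png"), dtype=np.uint16)  # "scan.png" is a placeholder
tensor = torch.from_numpy(arr.astype(np.int32))  # int32 holds the full uint16 range
```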
So far the best is an NN with CV 0.01300, LB 0.01343. Tree-based models seem to be performing significantly worse here (at least on CV for me).
That's funny. Your comments made me start an NN!!
So far: LGBM GroupKFold CV 0.01262, LB 0.01346; NN GroupKFold CV 0.01236, LB 0.01341.
Yes
Thank you for this, but there are a thing or two that I'm struggling to understand as a brand new user, and I would very much appreciate any feedback. This competition requires the submission to be in the form of a CDF, so my approach was to create a PDF for each individual play ID based on varying factors and then convert it into a CDF. I'm having trouble understanding where the testing data comes from for this project, because the submission requires a 3438-row submission file. I can see where you reference reading it and making the predictions, but I suppose I just don't understand how you opened and accessed it. I'm assuming that it is hidden from our view?

And when you state the following: Make your predictions here pred_df[,] <- 1.0, the 1.0 is just a placeholder, and this is where we create the "PlayId, Yard-99, ..., Yard99" CDF prediction values? And is it running through the training and prediction data sets in the same iteration? If so, does this not mean your predictions at iteration 10 and iteration 3000 would be created by different values in the same model, and create an unnecessarily high run time given how much larger the training set is once it has already run through the entire test data set? Or does it run the entire training set, then the entire test set, and then break the loop after only one pass?

Sorry about all of this; I'm just very confused as to how the submission system actually functions and why it only works in Python, which I have had only minimal interaction with. I just haven't been able to find any solid explanations of any of this. Thank you for anything you may be able to help with.
Thank you so much for taking the time to help clear that up! I really appreciate all the help!
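For readers with the same PDF-to-CDF question, here is a minimal sketch (the uniform PDF is a placeholder) of producing the cumulative values the submission expects over the Yards-99..Yards99 columns:

```python
import numpy as np

pdf = np.full(199, 1 / 199)          # placeholder PDF over yards -99..99
cdf = np.clip(np.cumsum(pdf), 0, 1)  # non-decreasing and capped at 1
print(cdf[0], cdf[-1])               # ~0.005 ... 1.0
```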
A list of interesting swish-like activations: https://github.com/digantamisra98/Echo (e.g. swish, beta-swish, mish, beta-mish).

```python
# efficientnet.py
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mish(nn.Module):
    def forward(self, x):
        return x * torch.tanh(F.softplus(x))

# E-swish: https://arxiv.org/pdf/1801.07145.pdf
BETA_SWISH = 1.125

class BetaSwishFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return BETA_SWISH * x * torch.sigmoid(x)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        sigmoid_x = torch.sigmoid(x)
        # d/dx [beta * x * sigmoid(x)] = beta * sigmoid(x) * (1 + x * (1 - sigmoid(x)))
        return grad_output * BETA_SWISH * sigmoid_x * (1 + x * (1 - sigmoid_x))

class BetaSwish(nn.Module):
    def forward(self, x):
        return BetaSwishFunction.apply(x)
```
Thanks! Why is it more memory-efficient than a plain Mish function?
What's the meaning of A\C and CE(p(i), 0)? I can't understand them well. Can you explain them in detail?
Sorry for my unclear illustration. A\C means A setminus C (or A - C). We regard the classes in A\C as weakly negative, so the CE(p(i), 0) term appears in our loss calculation. The reason why we do this: the labels on the segments are insufficient (we may know there are (or are not) cats in some segments, but it is hard for us to know the existence of the other 999 classes). And we found that just using the segment-level annotations might bring a higher average predicted confidence for each segment. So we consider utilizing the video-level annotations of the videos that the segments are located in. Our basic assumption is: if there are no dogs in video1, segments of video1 will not have dogs either. So for a certain segment, most categories in A\C can be weakly annotated as negative categories. We calculate the average cross-entropy value among classes in A\C and make it a part of our final loss function (see the sketch below). Rough experiments we did: train Mix-NeXtVLAD and Mix-ResNetLike models with the new and old loss functions. The original inference strategy is used. Top k we set: 1000.
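A minimal sketch of the weak-negative term described above (all names and shapes are assumptions): classes outside the video-level annotation set C are treated as weak negatives for the segment, and their cross-entropy against a 0 target is averaged into the loss.

```python
import torch

num_classes = 1000
probs = torch.rand(num_classes)              # predicted confidences p(i)
video_label_mask = torch.zeros(num_classes)  # C: classes annotated on the video
video_label_mask[:5] = 1.0

weak_neg = video_label_mask == 0             # A \ C: everything not annotated
# Cross-entropy with a 0 target reduces to -log(1 - p); average over A \ C.
weak_neg_loss = -torch.log(1 - probs[weak_neg] + 1e-7).mean()
```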
Hi guys, Have some background in statistics, EDA, supervised ML techniques and energy sector. Have been participating in competition on other websites too and using Python since last 2.5 years. If anyone is interested in working together, please reach out.
have sent the request
So far the best is an NN with CV 0.01300, LB 0.01343. Tree-based models seem to be performing significantly worse here (at least on CV for me).
You made me start with LGB; I'll report if it comes close to my NN.
Nope, it doesn't work. I tried editing it, but as soon as I hit refresh, it reverts back to the truncated state.
Even this doesn't work. Let me know if you fix it. I would really like to publish in the forum.
What does "model" mean? The model weights?
I think there's no need to upload the weights; just the ipynb/py that produces the weights needs to be uploaded. I'm not confident, though. But the weight file would be large...
Hi guys, Have some background in statistics, EDA, supervised ML techniques and energy sector. Have been participating in competition on other websites too and using Python since last 2.5 years. If anyone is interested in working together, please reach out.
Thank you!
So far: LGBM GroupKFold CV 0.01262, LB 0.01346; NN GroupKFold CV 0.01236, LB 0.01341.
GroupKFold on GameId?
Hi guys, I have some background in statistics, EDA, supervised ML techniques, and the energy sector. I have been participating in competitions on other websites too and have been using Python for the last 2.5 years. If anyone is interested in working together, please reach out.
I have sent a request for teaming up.
So far the best is an NN with CV 0.01300, LB 0.01343. Tree-based models seem to be performing significantly worse here (at least on CV for me).
Thanks for sharing your experience! Once you said it was possible, I got my LGB also quite close to the NN level ;)
So far the best is an NN with CV 0.01300, LB 0.01343. Tree-based models seem to be performing significantly worse here (at least on CV for me).
I'm using a quite simple tabular NN for now. Big enough to let features show themselves, but small enough to keep the running time low; currently, in submission mode, the whole notebook takes about 30 minutes.
I just discovered Neptune.ml. This platform is even easier to use than SageMaker, and its raison d'être seems to be to support Kaggle users. It's fantastic!!
Hi. Thanks for sharing. I've subscribed to a Neptune basic plan. I don't really get the idea, starting from this conversation about AWS: if it does provide an execution environment, why and where should one start by installing neptune-client?
If you want to obtain reproducible results, I can recommend using PyTorch + Catalyst instead of TensorFlow + Keras.
Many thanks
This is a great case, hope it helps you: https://www.kaggle.com/imnitishng/classification-model-training-heng-s-code
In the kernel, this code is used to load the weights:

```python
if initial_checkpoint is not None:
    state_dict = torch.load(initial_checkpoint,
                            map_location=lambda storage, loc: storage)
    # for k in ['logit.weight', 'logit.bias']: state_dict.pop(k, None)
    net.load_state_dict(state_dict, strict=False)
else:
    load_pretrain(net.e, skip=['logit'], is_print=False)
```

This line loads the optimizer state (the hyperparameters):

```python
# optimizer.load_state_dict(checkpoint['optimizer'])
```

and this one saves it:

```python
'optimizer': optimizer.state_dict()
```

You need to uncomment them in the kernel.
I was eagerlyyyyyy waiting for this. Thank you for doing this. Are you collecting the questions you are going to ask? If so, I would like to add a few things.
Yes, please post them in this thread.
This is a great case, hope it helps you: https://www.kaggle.com/imnitishng/classification-model-training-heng-s-code
Good start. Is there a universal code snippet? In my training, I want to save the weights and hyperparameter files whenever accuracy improves, so that in a new round of training the best weights/hyperparameters are loaded. When I modify the model code, the saved model files get reset.
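A minimal sketch of that pattern (every name here is a placeholder, not Heng's actual code): save the model and optimizer state whenever validation accuracy improves, and restore both at the start of the next run if a checkpoint exists.

```python
import os
import torch

CKPT = "best_checkpoint.pth"  # placeholder path

def save_if_better(net, optimizer, acc, best_acc):
    """Overwrite the checkpoint only when accuracy improves."""
    if acc > best_acc:
        torch.save({"model": net.state_dict(),
                    "optimizer": optimizer.state_dict(),
                    "best_acc": acc}, CKPT)
        return acc
    return best_acc

def try_resume(net, optimizer):
    """Load the best weights/optimizer state from a previous run, if any."""
    if os.path.exists(CKPT):
        ckpt = torch.load(CKPT, map_location="cpu")
        net.load_state_dict(ckpt["model"])
        optimizer.load_state_dict(ckpt["optimizer"])
        return ckpt["best_acc"]
    return 0.0
```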
If you want to obtain reproducible results, I can recommend using PyTorch + Catalyst instead of TensorFlow + Keras.
Thank you very much. That's a pity. I wonder how this is handled in scientific articles related to neural networks. Is there always a focus on ensuring that every result is exactly reproducible? Or shouldn't it be enough to just repeat the training several times and report the mean and standard deviation?
Thank you Jeremy for your awesome work and fastai. I forked your notebook and tried to write to a zip file, but the 5 GB output limit did not allow me to get the full data in a Kaggle kernel. I had to tweak some things:
* use TIFF instead of PNG, as I had an I;16 mode exception with Pillow that I suppose is due to the version of Pillow.
* disable parallel processing :( because ZipFile does not pickle properly.
I found exactly the same issues. Could you please share your fixed code for both cases? Or could you please try running it on the kernel?
Hi, I am Apoorva Bapat from Boulder, Colorado. I have experience with Python, PHP, and R. I have worked with a startup to get the company their angel funding, competed in and won Go Code Colorado 2019, and I currently work with an energy and utility company. I am looking for a team to join! Message me at andbapat@gmail.com if interested!
Hi Apoorva, interested in teaming up! Let me know.
Hi guys, I am Apoorve, English-speaking, from India. I have experience with Python/pandas/neural networks, am new to Kaggle, and am currently looking for teammate(s) for this competition. Contact me at apoo123rve@gmail.com to team up :)
Hi Apoorve, Pratik from India. Let me know if you're interested in teaming up!
Hi, I am a post-graduate student majoring in data science and a beginner on Kaggle. I want to use my knowledge to solve practical problems. I have some experience with Python/pandas and am familiar with some basic models. I am from Mainland China and now study in Hong Kong. If anyone is interested in working together and doesn't mind communicating remotely, my email is nanbei629@gmail.com. Thanks!
Hi Ada, interested in teaming up! let me know
Hi, I am a student at the University of Chicago and am looking for a team to work on this competition. I have some experience working with R, Python, and large data sets.
Hi Nupur, interested in teaming up. Let me know!
You need to set the seed of all the random initializations of the NN if you want to make it fully reproducible.
Not so difficult, as you just need to set the usual seeds; see the sketch below.
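A minimal sketch of the usual seed-setting boilerplate for PyTorch (for Keras/TensorFlow the idea is the same, though full GPU determinism there is a separate problem, as discussed elsewhere in this thread):

```python
import random
import numpy as np
import torch

def seed_everything(seed=42):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True  # trades speed for determinism
    torch.backends.cudnn.benchmark = False

seed_everything(42)
```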
Well, I don't find your question stupid. I think those files are kind of confusing if you haven't checked the note: cv10_.npy has microcalcifications, whereas test10_.npy has mass cases. I think it should be fixed.
I think when I created these test files I did not split the data properly; it was probably just split down the middle. I fixed this in later versions of the dataset and will fix it here when I have time.
Notebook runtime should be less than 1 hour.
This is a very big topic. You can ensemble fewer models, or optimize your algorithms.
If you want to obtain reproducible results, I can recommend using PyTorch + Catalyst instead of TensorFlow + Keras.
Obtaining reproducible results on GPU with current Keras is impossible. I recommend watching this video to understand the problem.
Notebook runtime should be less than 1 hour.
How can I do it?
Nice job :) May I ask roughly how many features you are training on for each of the tree-based and NN-based models?
It's super clear now. Thanks for your kind explanation :) I hope you keep up with what you are doing now!
I'm using Keras as well. One (of several) neat techniques I've learned over this very competition is how to implement early stopping. Once you do it, you can forget about the number of epochs: one less parameter to worry about. Check this great notebook!
Mine trains for 100 epochs :) Controlling underfitting/overfitting is all part of the tuning, which is the hard part. Also, more epochs aren't necessarily better...
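A minimal sketch of the early-stopping setup being discussed (model and data names are placeholders): monitor validation loss, stop when it plateaus, and keep the best weights.

```python
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor="val_loss", patience=10,
                           restore_best_weights=True)
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=1000, callbacks=[early_stop])
```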
Great stuff. I really love it. I was wondering how you handle a case when the QB is not in the play?
In that case (which happened once or twice in over 23,000 plays) I just set the value to 50 (~half the field).
Nice job :) May I ask roughly how many features you are training on for each of the tree-based and NN-based models?
No problem, there are no newbie questions... we are all learning :) In the pre-processing phase, my objective is to use all the provided data to generate 1 row per play, with the best possible features. Let's say one of these features is the runner's speed. For that one, I will filter, from within the 22 rows of that play, who the runner is, and extract his speed. Let's say another feature is the defense centroid, i.e. the average X-Y point of all players who are defending that play. In that case, I will filter the 11 players in the defense from within the 22 rows, average their X and Y, and done: one more feature. Let's say my next feature idea is the distance from the closest defender to the runner. In that case, I will filter from within the 22 rows the position of the runner and the positions of all 11 players in the defense. Next, I will calculate the Euclidean distance between the runner and each of the 11 players, and select the minimum. Done, another feature! Let's get more sophisticated now: let's say I want, for that specific play, the number of players within 3, 6, 9, 12, 15 yards of the runner. In that case, as I already have the distances from the runner to each of the 11 defense players, I will count how many of them fall within each of these intervals, and return those. So on and so forth (sketched in code below)... does it make sense? :) Cheers!
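A minimal sketch of the per-play feature extraction described above (column names follow the competition data; treat the details as assumptions): given the 22 rows of one play, compute the runner's speed, the defense centroid, the closest-defender distance, and the defenders-within-radius counts.

```python
import numpy as np
import pandas as pd

def play_features(play: pd.DataFrame) -> dict:
    # The runner is the row whose NflId matches NflIdRusher.
    runner = play[play["NflId"] == play["NflIdRusher"]].iloc[0]
    # The defense is the 11 players on the other team.
    defense = play[play["Team"] != runner["Team"]]

    # Euclidean distance from the runner to each defender.
    dists = np.hypot(defense["X"] - runner["X"], defense["Y"] - runner["Y"])

    return {"runner_speed": runner["S"],
            "def_centroid_x": defense["X"].mean(),
            "def_centroid_y": defense["Y"].mean(),
            "min_def_dist": dists.min(),
            # number of defenders within 3, 6, 9, 12, 15 yards of the runner
            **{f"def_within_{r}": int((dists <= r).sum())
               for r in (3, 6, 9, 12, 15)}}
```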
Nice job - I agree with your guess that the data is from USA cities. While it is a worldwide organization now, data matching this has been collected in the USA for the last decade. I would recommend some changes, however: if you're working on the USA assumption, you should go all the way! Possible days with high energy use: the day after Thanksgiving, Super Bowl Sunday. Days with low use: Christmas Eve, any four-day weekend connected to holidays. Days where lots of folks still work and you might not see much of an energy drop: Columbus Day. Here in Pennsylvania, over one million folks are out in the woods hunting deer on the Monday after Thanksgiving.
I plotted the average air temperature per year, grouped by month. A lot of the yearly temperature patterns can be matched to the US. Check this out -> https://www.kaggle.com/c/ashrae-energy-prediction/discussion/113772#latest-654627
If you want to obtain reproducible results, I can recommend using PyTorch + Catalyst instead of TensorFlow + Keras.
Thanks for the tip. If I had known that a few weeks ago, I would have had the time to get into PyTorch + Catalyst. I've worked very hard in Keras for my thesis and have already written all my code in it, so it would probably be a great challenge to switch to PyTorch + Catalyst now. If it were still possible to train reproducibly with a GPU in Keras, that would be a huge workload reduction.
EfficientNet-B2, 410x410, single fold, hflip TTA, public LB: 0.064
How long does one epoch take?
Great kernel! Thank you for sharing... I have no experience with survival analysis, but your kernel triggered me to start playing around with it... Have you tried including more variables?
Thank you for your comments! I'm trying to incorporate second-order correlations of player locations, but it is not going well...
Thanks, but I think Series.dt properties are easier for cyclical datetime features https://pandas.pydata.org/pandas-docs/stable/reference/series.html#datetime-properties
Maybe :) But this approach lets you implement features with a more cunning choice of the period, in contrast to just seconds, minutes, etc. (e.g. the sin/cos encoding sketched below).
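A minimal sketch of that kind of cyclical feature (the hourly index is a placeholder): map hour-of-day onto a circle so that 23:00 and 00:00 end up close together, and swap in whatever period you prefer.

```python
import numpy as np
import pandas as pd

ts = pd.Series(pd.date_range("2019-01-01", periods=48, freq="H"))
hour = ts.dt.hour
# The sin/cos pair encodes the 24-hour cycle without a jump at midnight.
hour_sin = np.sin(2 * np.pi * hour / 24)
hour_cos = np.cos(2 * np.pi * hour / 24)
```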
If you want obtain reproducible result, then I can recommend you use PyTorch + Catalyst instead Tensorflow + Keras.
To my mind, if you know you can't obtain reproducible results, then it's better to report results as mean±std.
Thanks for sharing! In your kernel you load the models from:

```python
unet_se_resnext50_32x4d = \
    load('/kaggle/input/severstalmodels/unet_se_resnext50_32x4d.pth').cuda()
unet_mobilenet2 = load('/kaggle/input/severstalmodels/unet_mobilenet2.pth').cuda()
unet_resnet34 = load('/kaggle/input/severstalmodels/unet_resnet34.pth').cuda()
```

Could you please explain what the tracing procedure is for, and where exactly you trace the models in this inference kernel? Because I have problems loading models and want to figure out how to load them correctly. Thank you!
I trained on PyTorch 1.3 and now I have a problem loading my model :(
Thanks for such a great course... In the real world, I am dealing with 1000 time series (sales data for 1000 unique products) and finding it hard to start exploring: exploring the properties of a couple of time series is doable, but I am unable to perform EDA on 1000 time series plots. Please guide me on this; it would be of great value.
Hi Prathu, I face a similar issue. I am very new to time series analysis. My goal is to forecast sales for each store-city-medicine combination; there are 5000 such combinations. As of now I am hardcoding the values like store number, city, and medicine name, getting individual ARIMA fits, and using those for individual forecasts. But I need a way to loop over all the combinations so that it generates the forecasts and the MSE/RMSE values. The end goal is to see which combinations generate good forecasts and which are the poor ones, so that we can concentrate on the poor forecasts and gather more data in the next quarter. I can only use Python.
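A minimal sketch of looping over the combinations instead of hardcoding them (the file sales.csv and the columns store, city, medicine, date, sales are assumptions, and this uses the modern statsmodels ARIMA API):

```python
import pandas as pd
from sklearn.metrics import mean_squared_error
from statsmodels.tsa.arima.model import ARIMA

df = pd.read_csv("sales.csv")  # placeholder: long format with the assumed columns

results = {}
for key, grp in df.groupby(["store", "city", "medicine"]):
    series = grp.sort_values("date")["sales"].reset_index(drop=True)
    train, test = series[:-8], series[-8:]        # hold out the last 8 points
    fitted = ARIMA(train, order=(1, 1, 1)).fit()  # one fixed order for every series
    forecast = fitted.forecast(steps=len(test))
    results[key] = mean_squared_error(test, forecast)

# Inspect the worst-forecast combinations first.
worst = sorted(results.items(), key=lambda kv: kv[1], reverse=True)[:20]
```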
Instead of going through all that trouble and errors, just use:

```python
import os
os.environ['KAGGLE_USERNAME'] = "xxxxxx"  # username from the json file
os.environ['KAGGLE_KEY'] = "xxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # key from the json file
!kaggle datasets download -d iarunava/happy-house-dataset  # API command copied from Kaggle
```
Thanks, that helped a lot.
EfficientNet-B2, 410x410, single fold, hflip TTA, public LB: 0.064
4-5 hours
What were the individual LB scores of all 3 models?
resnet34: 0.88791, mobilenet: 0.88163, se_resnext: 0.88592
Thanks for sharing! One question: what is llama encoding? I tried to Google it but couldn't find anything; is it another name for one-hot encoding? Thanks again.
As far as I know, there is no such thing as llama encoding. I was just using it as a stand-in for "whatever exotic technique people are suggesting this week". That's why I included the next sentence: "(Yes, I made one of those up.)" My apologies for the confusion.
So what are the cities? Or are you trying to crowd-source it? ;) You should scrape a site with the averages per year, though. Or at least average all 3 years in this dataset, otherwise you'll be comparing apples to oranges...
Inside my kernel I have a 3-year plot; its repeated pattern is obvious. Averaging the 3 years out, I'm expecting a similar pattern. I will presume all stations are in the United States for now :D Site_0's air temperature looks similar to Tampa. Perhaps it can be combined with other metrics to narrow things down further.
Maybe they just found a feature that is common to high values. If you found a feature that separates low values from high values, that would help your model.
If you found a feature that separates low values from high values, that would help your model. Thanks, I will try to find them!
Thanks, K-engineers, for fixing it. Mission accomplished. I was about to sing "Let it go, let it go".
Free yourself from the R wrappers and convert it to Python while there's still time! lol jk
haha you got the point!! :)
I was eagerlyyyyyy waiting for this. Thank you for doing this. Are you collecting the questions you are going to ask? If so, I would like to add a few things.
> Background: How did you get into AI and how long have you been doing this? Your teammates call you god... why is that?
> You became Grandmaster in 6 months; how would you describe that?
> What would be your advice for amateurs with no prior experience in AI who want to do well on Kaggle?
> There is something about Chinese Kagglers/GMs. You and other Kagglers (mostly your teammates) like Earhian, RUA, Yelan, etc. are doing amazing work. What do you think about the success of Chinese Kagglers? In your opinion, what is it that separates Chinese Kagglers from the rest?
> Is there something similar to Kaggle in China? What does Kaggle success mean to Chinese tech companies?
> Most of us join Kaggle with a goal to become GM; you have already achieved that. What's next?
> Request: Please keep open-sourcing your solution code; it has been an amazing source of learning for many beginners like me. Do you plan on releasing your code for Recursion?
Hi, I am a post-graduate student majoring in data science and a beginner on Kaggle. I want to use my knowledge to solve practical problems. I have some experience with Python/pandas and am familiar with some basic models. I am from Mainland China and now study in Hong Kong. If anyone is interested in working together and doesn't mind communicating remotely, my email is nanbei629@gmail.com. Thanks!
Hi Pratik, I am interested in working together with your team.
What is the direction of the acceleration?
Given that the only direction we have in the data is Dir, we probably have no choice but to assume that the players moved approximately in a straight line during the last two time steps (~0.2 s). (So maybe one could add the spread of the direction of acceleration as a hyperparameter to the model?)
Nope, it doesn't work. I tried editing it, but as soon as I hit refresh, it reverts back to the truncated state.
We weren't able to replicate what you're experiencing. Currently, there's a bug in the forums where, after editing and clicking "Save Changes", clicking "Edit" again doesn't save anything that was updated. If this is the case, you will need to refresh the browser before clicking the "Edit" button again.
Notebook runtime should be less than 1 hour.
OK. I want to say that it had run successfully. Is that for the same reason?
I love that GIF at the beginning <3
I tried to find a GIF showing that solving a Rubik's cube is a little hard and boring! And then I wrote a computer program to simulate a Rubik's cube. Thanks for checking out my kernel.
I could be wrong, but you can mostly just use the home team as a proxy for the stadium name. After going through all the cleanup, they should correlate 100%, except for the exhibition-type games in London and Mexico City. Edit: just confirmed that's the case using Nikita's kernel. You should end up with 33 stadiums: 32 teams, minus 1 stadium shared by the Jets & Giants, minus 1 shared by the Rams & Chargers, plus 3 neutral locations (Wembley, Twickenham, Azteca).
Things to add: according to the training data there are 34 stadiums (apparently the LA Rams & Chargers have different stadiums: respectively the Los Angeles Memorial Coliseum and the StubHub Center). StubHub Center is actually the former name of the temporary Dignity Health Sports Park. From Wikipedia about Dignity Health Sports Park: "The stadium became the temporary home of the Los Angeles Chargers beginning in 2017 – making it the smallest NFL stadium – until the completion of the SoFi Stadium in 2020, which they will then share with the Los Angeles Rams."
You need to attach the dataset to your notebook. The easiest way is to create the notebook from within the competition: click the "Notebooks" tab on any page in the competition, or just go here: https://www.kaggle.com/c/nfl-big-data-bowl-2020/notebooks, and click on "New Notebook"; then the notebook should be set up properly. (And I think you might be able to run it locally from within a Docker container on any of the major OSes, but I haven't tried it myself.)
Now it works, thanks Christoffer!
Names of columns that I can't handle when the code is not quoted: time.1 (dataset "Researchers by field of R&D"); median.1 and upper.1 (dataset "Mortality among children"). In the dataset "Multinationals by industrial sector": year.1, power_code code, reference period code (no underscore in this last one). There is a dot between the column name and the number. I tried to rename the columns but did not succeed. Thanks for answering, Parul. I hope you can help me.
Could you give me the name of the dataset you are using, or the link? I'll be able to help you better that way.
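While waiting for the link, here is a minimal sketch of one way to handle the dotted names (the columns are taken from the question): rename them so they can be referenced without quoting.

```python
import pandas as pd

df = pd.DataFrame(columns=["time.1", "median.1", "upper.1"])  # placeholder frame
# Replace dots and spaces so the names are plain identifiers.
df = df.rename(columns=lambda c: c.replace(".", "_").replace(" ", "_"))
print(df.columns.tolist())  # ['time_1', 'median_1', 'upper_1']
```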
Hi Konstantin, I am able to run your code to begin training a model, but every time I set the device to the TPUs it takes more than a minute and I get this message:

```
2019-10-21 12:20:25.647590: E tensorflow/core/framework/op_kernel.cc:1579] OpKernel ('op: "ErfinvGrad" device_type: "CPU" constraint { name: "T" allowed_values { list { type: DT_DOUBLE } } }') for unknown op: ErfinvGrad
2019-10-21 12:20:25.647662: E tensorflow/core/framework/op_kernel.cc:1579] OpKernel ('op: "ErfinvGrad" device_type: "CPU" constraint { name: "T" allowed_values { list { type: DT_FLOAT } } }') for unknown op: ErfinvGrad
2019-10-21 12:20:25.647695: E tensorflow/core/framework/op_kernel.cc:1579] OpKernel ('op: "NdtriGrad" device_type: "CPU" constraint { name: "T" allowed_values { list { type: DT_DOUBLE } } }') for unknown op: NdtriGrad
2019-10-21 12:20:25.647722: E tensorflow/core/framework/op_kernel.cc:1579] OpKernel ('op: "NdtriGrad" device_type: "CPU" constraint { name: "T" allowed_values { list { type: DT_FLOAT } } }') for unknown op: NdtriGrad
```

It still seems to work, but I could not find anything online about this specific message. Have you, or anyone else who has tried using PyTorch on TPU, seen this? I assumed it was some error, as I also saw it when I tried to use pytorch_xla in the Recursion competition after pulling the xla Docker image, but there I could not get anything to begin training. Thanks again for the guide!
Yes, I get the same message; I'm not sure how important it is. I assumed it's some op that does not have TPU support and would run on the CPU. Regarding training, it's normal that there is no output for some time, especially on the first run, because the models are compiled on the TPU. But within a few minutes there should be some output, and some CPU usage. If there is none, then something is not quite right.
transformers is supposed to be a list of tuples. Change your prepro variable to this:

```python
prepro = ColumnTransformer(transformers=[
    ('num', numeric_transformer, numeric_cols),
    ('cat', cate_transformer, cate_cols),
])
```
Thanks, I get it.
Re-inventing the wheel and occupying bytes in Kaggle's servers...
Hahaha this competition is not efficient at all
Re-inventing the wheel and occupying bytes in Kaggle's servers...
??
I can understand fold = 1, 2, 3, 4, 5, but what's the meaning of fold=0 in your code?
It means the k-fold mean.
Hello, I'm also seeing the same newsfeed for 15 days. Any trick to solve this? Thank you, Jesús
Mine went back to being stale again last week.
I plan to model on the assumption that the sites are cities in the USA. Your find of 15 regions is a nice match to the 15 sites in our data; to minimize the weather data they would have to be cities, but one or more cities from each region would make some sense. I am going with the USA assumption, as I can find multiple DOE surveys over the last decades, with lots of summary data available on building energy use in the USA as a result. I have not seen anything similar in my Google searches for other parts of the world.
What about Canada? Any July 1st patterns?
Aren't there a total of 16 site_ids in the dataset? Or am I missing something here?
I didn't check, but if it starts at 0 and ends at 15, that would indeed be 16!
I wrote about this earlier here. If you do not take into account the angle of the stadium with respect to Earth's parallels, then you do not actually know whether the wind is blowing against the players or at their backs, so there is a great chance that you are only introducing noise into your data. There was also a similar thread, and it is allowed to hardcode the angles of stadiums, so it may actually add value to your model, but I am sceptical about this.
Yep, you are right.
Very useful, thank you.
thanks