How to reduce predictors the right way for a logistic regression model
I have been reading some books (or parts of them) on modeling, among them F. Harrell's "Regression Modeling Strategies", because I currently need to build a logistic model from binary response data. My data set contains continuous, categorical, and binary predictors. I have around 100 predictors right now, which is obviously far too many for a good model, and many of them are related, since they are often based on the same metric with slight variations.
From what I have read, univariable screening and step-wise selection are among the worst things you can do to reduce the number of predictors. I gather that the LASSO is a reasonable technique (if I understood it correctly), but I doubt you can simply throw it at 100 predictors and expect anything good to come of it.
So what are my options here? Do I really just have to sit down with my supervisors and the knowledgeable people at work and reason out what the top five predictors could or should be (we might be wrong), or which other approach(es) should I consider?
And yes, I also know that this topic is heavily discussed, both online and in books, but it can seem a bit overwhelming when you are new to the modeling field.
Tags: logistic, predictive-models, modeling, predictor
asked by Denver Dang; edited by Ben Bolker
2 Answers
+1 for "sometimes seems a bit overwhelming". It really depends (as Harrell clearly states; see the section at the end of Chapter 4) on whether you want to do:

- confirmatory analysis ($\to$ reduce your predictor complexity to a reasonable level without looking at the responses, by PCA, subject-area considerations, or ...);
- predictive analysis ($\to$ use appropriate penalization methods). The lasso could very well work OK with 100 predictors, if you have a reasonably large sample. Feature selection will be unstable, but that's OK if all you care about is prediction. I have a personal preference for ridge-like approaches that don't technically "select features" (because they never reduce any parameter to exactly zero), but whatever works. You'll have to use cross-validation to choose the degree of penalization, which will destroy your ability to do inference (construct confidence intervals on predictions) unless you use cutting-edge high-dimensional inference methods (e.g. Dezeure et al. 2015; I have not tried these approaches, but they seem sensible);
- exploratory analysis: have fun, be transparent and honest, and don't quote any p-values.

answered by Ben Bolker
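The penalized-regression route can be sketched as follows. This is a minimal illustration using scikit-learn with synthetic data standing in for the ~100 correlated predictors; the dataset, sample size, and solver settings are all assumptions, not the asker's actual setup.

```python
# Sketch: lasso (L1) and ridge (L2) penalized logistic regression with a
# cross-validated penalty strength, on synthetic data standing in for
# ~100 predictors and a binary response.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=100,
                           n_informative=10, random_state=0)

# L1 (lasso): can drive some coefficients exactly to zero, i.e. it
# performs feature selection as a by-product of fitting.
lasso = make_pipeline(
    StandardScaler(),  # penalties are scale-sensitive; standardize first
    LogisticRegressionCV(Cs=10, penalty="l1", solver="saga",
                         cv=5, max_iter=5000, random_state=0),
)
lasso.fit(X, y)
coef = lasso.named_steps["logisticregressioncv"].coef_.ravel()
print("nonzero lasso coefficients:", int(np.sum(coef != 0)), "of", coef.size)

# L2 (ridge): shrinks coefficients toward zero but never exactly to zero,
# matching the "ridge-like approaches don't select features" point above.
ridge = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(Cs=10, penalty="l2", cv=5, max_iter=5000),
)
ridge.fit(X, y)
```

Note that the cross-validation here chooses the penalty for predictive performance only; as the answer says, the resulting intervals and p-values are not valid without dedicated post-selection inference methods.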
There are many different approaches. I would recommend trying some simple ones, in the following order:

- L1 regularization (with an increasing penalty; the larger the regularization coefficient, the more features are eliminated)
- Recursive Feature Elimination (https://scikit-learn.org/stable/modules/feature_selection.html#recursive-feature-elimination) -- removes features incrementally by eliminating those associated with the smallest model coefficients (assuming those are the least important ones; it is obviously crucial here to normalize the input features first)
- Sequential Feature Selection (http://rasbt.github.io/mlxtend/user_guide/feature_selection/SequentialFeatureSelector/) -- selects features based on how much they contribute to predictive performance

answered by resnet
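The last two items above can be sketched as follows. This uses scikit-learn's built-in `RFE` and `SequentialFeatureSelector` (the latter in place of the mlxtend class linked above, which works similarly); the synthetic data and the target of five features are illustrative assumptions.

```python
# Sketch: recursive feature elimination and forward sequential selection
# wrapped around a logistic regression, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE, SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=30,
                           n_informative=5, random_state=0)
est = LogisticRegression(max_iter=2000)

# RFE: repeatedly refit and drop the feature with the smallest |coefficient|.
# Standardizing first matters, since coefficient size is scale-dependent.
Xs = StandardScaler().fit_transform(X)
rfe = RFE(est, n_features_to_select=5).fit(Xs, y)
print("RFE keeps features:", list(rfe.get_support(indices=True)))

# Forward sequential selection: add features one at a time, keeping the
# one that most improves cross-validated predictive performance.
sfs = SequentialFeatureSelector(est, n_features_to_select=5,
                                direction="forward", cv=3).fit(Xs, y)
print("SFS keeps features:", list(sfs.get_support(indices=True)))
```

Both wrappers expose `get_support()` and `transform()`, so either can be dropped into a pipeline ahead of the final logistic fit.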