Quick Check 20.3 – Data 100, Summer 2020
(Fall 2019 Final) Suppose we are trying to train a decision tree model for a binary classification task. We denote the two classes as 0 (the negative class) and 1 (the positive class) respectively. Our input data consists of 6 sample points and 2 features x1 and x2. The data is given in the table below, and is also plotted for your convenience on the right.
What is the entropy at the root of the tree? Round your answer to the nearest hundredth.
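The root entropy depends only on the class counts of all six points, which the table (not reproduced here) specifies. A minimal sketch of the computation, using a hypothetical 3-positive / 3-negative split purely for illustration:

```python
import math

def entropy(counts):
    """Entropy in bits of a node, given a list of per-class counts.
    H = -sum_k p_k * log2(p_k), skipping empty classes."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

# Hypothetical example (NOT the actual quiz data): 3 points in class 1
# and 3 points in class 0 give the maximum binary entropy of 1.00.
print(round(entropy([3, 3]), 2))
```

A perfectly pure node (all points in one class) has entropy 0, and a balanced binary node has entropy 1, so the rounded answer always lies in [0.00, 1.00].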
Suppose we split the root node with a rule of the form xi ≥ β, where i could be either 1 or 2. Which of the following rules minimizes the weighted entropy of the two resulting child nodes?
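To compare candidate rules, each split is scored by the size-weighted average of the two children's entropies. A sketch under the assumption that the data is a list of (x1, x2, y) tuples (placeholder values below, since the quiz's table is not reproduced here):

```python
import math

def entropy(counts):
    """Entropy in bits of a node, given per-class counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def weighted_entropy(labels_left, labels_right):
    """Size-weighted average entropy of the two child nodes after a split."""
    n_l, n_r = len(labels_left), len(labels_right)
    n = n_l + n_r
    counts = lambda ys: [ys.count(0), ys.count(1)]
    return (n_l / n) * entropy(counts(labels_left)) + \
           (n_r / n) * entropy(counts(labels_right))

def score_rule(data, i, beta):
    """Weighted entropy of the split x_i >= beta; i is 1 or 2."""
    left = [y for (x1, x2, y) in data if (x1, x2)[i - 1] >= beta]
    right = [y for (x1, x2, y) in data if (x1, x2)[i - 1] < beta]
    return weighted_entropy(left, right)

# Placeholder points, NOT the actual quiz data:
data = [(1, 5, 0), (2, 4, 0), (3, 3, 0), (4, 2, 1), (5, 1, 1), (6, 0, 1)]
print(score_rule(data, 1, 4))  # a pure split of this placeholder data
```

Evaluating `score_rule` for each of the offered (i, β) choices and taking the minimum identifies the best rule; a weighted entropy of 0 means both children are pure.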
This form was created inside of UC Berkeley.