# Machine Learning & Reinforcement Learning Algorithms

This repository contains a collection of Jupyter notebooks implementing fundamental machine learning, decision-making, and reinforcement learning algorithms.
Each notebook focuses on a specific concept, combining theory with practical implementation for educational and experimental purposes.

---

## 📂 Repository Structure

```
code
├── Research Models
│   ├── multi-arm-bandit.ipynb
│   ├── q-learning.ipynb
│   └── svm.ipynb
├── actor-critic.ipynb
├── bayesian-decision-making.ipynb
├── logistic-regression.ipynb
├── multi-arm-bandit.ipynb
├── neural-network.ipynb
├── policy-gradient.ipynb
├── q-learning.ipynb
└── svm.ipynb
```

---

## Research Paper

The notebooks below implement the algorithms used in the accompanying research paper:

1. [Support Vector Machine (SVM)](https://github.com/TechMLW/QuantFP/blob/master/code/svm.ipynb)
2. [Multi-Armed Bandit](https://github.com/TechMLW/QuantFP/blob/master/code/multi-arm-bandit.ipynb)
3. [Q-Learning](https://github.com/TechMLW/QuantFP/blob/master/code/q-learning.ipynb)

---

## 📘 Notebook Descriptions

1. [Logistic Regression](https://github.com/TechMLW/QuantFP/blob/master/code/logistic-regression.ipynb)
   - Binary classification using logistic regression
   - Model formulation, training, and evaluation
   - Gradient-based optimization
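
   A minimal NumPy sketch of the idea on hypothetical toy data (illustrative only, not the notebook's exact code):

   ```python
   import numpy as np

   # Hypothetical toy data: two features, binary labels.
   rng = np.random.default_rng(0)
   X = rng.normal(size=(200, 2))
   y = (X[:, 0] + X[:, 1] > 0).astype(float)

   w, b, lr = np.zeros(2), 0.0, 0.1
   for _ in range(500):
       p = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid predictions
       w -= lr * X.T @ (p - y) / len(y)    # gradient of the log-loss w.r.t. w
       b -= lr * np.mean(p - y)            # gradient w.r.t. the bias

   print("training accuracy:", np.mean((p > 0.5) == y))
   ```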

2. [Support Vector Machine (SVM)](https://github.com/TechMLW/QuantFP/blob/master/code/svm.ipynb)
   - Linear and margin-based classification
   - Decision boundaries and hinge loss
   - Practical implementation from scratch
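
   A rough sketch of a linear SVM trained by subgradient descent on the hinge loss, again with made-up toy data rather than the notebook's implementation:

   ```python
   import numpy as np

   # Toy linearly separable data with labels in {-1, +1}.
   rng = np.random.default_rng(1)
   X = rng.normal(size=(200, 2))
   y = np.where(X[:, 0] - X[:, 1] > 0, 1.0, -1.0)

   w, b, lr, lam = np.zeros(2), 0.0, 0.01, 0.01
   for _ in range(1000):
       violators = y * (X @ w + b) < 1          # points inside or beyond the margin
       w -= lr * (lam * w - y[violators] @ X[violators] / len(y))
       b += lr * np.sum(y[violators]) / len(y)  # subgradient step on the hinge loss

   print("training accuracy:", np.mean(np.sign(X @ w + b) == y))
   ```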

3. [Neural Network](https://github.com/TechMLW/QuantFP/blob/master/code/neural-network.ipynb)
   - Feedforward neural network implementation
   - Activation functions and backpropagation
   - Training and inference workflow
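
   An illustrative two-layer network on XOR; the architecture and hyperparameters here are assumptions for demonstration, not the notebook's:

   ```python
   import numpy as np

   # XOR: a classic task a linear model cannot solve but a small MLP can.
   X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
   y = np.array([[0], [1], [1], [0]], dtype=float)

   rng = np.random.default_rng(0)
   W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
   W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
   lr = 0.5

   for _ in range(5000):
       h = np.tanh(X @ W1 + b1)              # hidden layer (tanh activation)
       p = 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid output
       dp = (p - y) / len(X)                 # log-loss gradient at the output
       dh = (dp @ W2.T) * (1 - h ** 2)       # backpropagate through tanh
       W2 -= lr * h.T @ dp
       b2 -= lr * dp.sum(axis=0)
       W1 -= lr * X.T @ dh
       b1 -= lr * dh.sum(axis=0)

   print(p.round(2).ravel())                 # should approach [0, 1, 1, 0]
   ```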

4. [Bayesian Decision Making](https://github.com/TechMLW/QuantFP/blob/master/code/bayesian-decision-making.ipynb)
   - Probabilistic reasoning under uncertainty
   - Bayesian inference and decision rules
   - Applications to optimal decision policies
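
   A small sketch of Bayesian updating followed by expected-loss action selection, with invented priors and loss values:

   ```python
   import numpy as np

   # Two hypotheses about a coin (made-up numbers): fair vs. biased.
   priors = np.array([0.5, 0.5])        # P(fair), P(biased)
   p_heads = np.array([0.5, 0.8])       # P(heads | hypothesis)

   # Observe 8 heads in 10 flips and update beliefs by Bayes' rule.
   heads, flips = 8, 10
   likelihood = p_heads ** heads * (1 - p_heads) ** (flips - heads)
   posterior = priors * likelihood
   posterior /= posterior.sum()

   # losses[action, state]: accepting a biased coin is the costly mistake.
   losses = np.array([[0, 10],          # action 0: accept
                      [1, 0]])          # action 1: reject
   expected_loss = losses @ posterior   # expected loss of each action
   print("posterior:", posterior.round(3))
   print("best action:", ["accept", "reject"][int(expected_loss.argmin())])
   ```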

5. [Multi-Armed Bandit](https://github.com/TechMLW/QuantFP/blob/master/code/multi-arm-bandit.ipynb)
   - Exploration vs. exploitation trade-off
   - ε-greedy and related strategies
   - Performance comparison of bandit algorithms
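
   A minimal ε-greedy sketch on hypothetical Bernoulli arms (the notebook's setup may differ):

   ```python
   import numpy as np

   # Three hypothetical Bernoulli arms; the agent must discover the best one.
   rng = np.random.default_rng(0)
   true_means = np.array([0.2, 0.5, 0.7])
   Q, N, eps = np.zeros(3), np.zeros(3), 0.1

   for _ in range(5000):
       # Explore with probability eps, otherwise exploit the current estimate.
       a = rng.integers(3) if rng.random() < eps else int(Q.argmax())
       reward = float(rng.random() < true_means[a])
       N[a] += 1
       Q[a] += (reward - Q[a]) / N[a]   # incremental sample-average update

   print("estimated arm values:", Q.round(3))  # should approach the true means
   ```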

6. [Q-Learning](https://github.com/TechMLW/QuantFP/blob/master/code/q-learning.ipynb)
   - Model-free reinforcement learning
   - Q-table updates and temporal-difference learning
   - Policy derivation from learned values
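
   A minimal tabular Q-learning sketch on an invented 5-state chain environment (not the notebook's environment):

   ```python
   import numpy as np

   # Deterministic chain: states 0..4, reward only at the right end.
   n_states, n_actions = 5, 2           # actions: 0 = left, 1 = right
   Q = np.zeros((n_states, n_actions))
   alpha, gamma, eps = 0.1, 0.9, 0.1
   rng = np.random.default_rng(0)

   for _ in range(500):
       s = 0
       while s != n_states - 1:
           if rng.random() < eps:       # ε-greedy with random tie-breaking
               a = int(rng.integers(n_actions))
           else:
               a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
           s_next = max(s - 1, 0) if a == 0 else s + 1
           r = 1.0 if s_next == n_states - 1 else 0.0
           # Temporal-difference update toward the bootstrapped target.
           Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
           s = s_next

   print("greedy policy:", Q.argmax(axis=1)[:-1])  # expect all 1s (move right)
   ```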

7. [Policy Gradient](https://github.com/TechMLW/QuantFP/blob/master/code/policy-gradient.ipynb)
   - Direct policy optimization methods
   - Stochastic policies and gradient estimation
   - Reinforcement learning with function approximation
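
   A REINFORCE-style sketch on a toy two-armed bandit with a softmax policy; every quantity here is an illustrative assumption:

   ```python
   import numpy as np

   # Sample from a softmax policy, then step along the log-likelihood
   # gradient weighted by the observed reward.
   rng = np.random.default_rng(0)
   true_means = np.array([0.3, 0.7])    # hypothetical arm payouts
   theta, lr = np.zeros(2), 0.1         # one preference per action

   for _ in range(3000):
       probs = np.exp(theta) / np.exp(theta).sum()
       a = int(rng.choice(2, p=probs))
       r = float(rng.random() < true_means[a])
       grad = -probs                    # d log pi(a) / d theta for a softmax
       grad[a] += 1.0
       theta += lr * r * grad           # REINFORCE update (no baseline)

   print("action probabilities:", (np.exp(theta) / np.exp(theta).sum()).round(3))
   ```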

8. [Actor-Critic](https://github.com/TechMLW/QuantFP/blob/master/code/actor-critic.ipynb)
   - Hybrid value-based and policy-based approach
   - Actor and critic architecture
   - Advantage estimation and learning stability
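
   A one-state actor-critic sketch, directly comparable to the policy-gradient example above (toy setup, not the notebook's code):

   ```python
   import numpy as np

   # The critic's value estimate V turns raw rewards into advantages (r - V),
   # reducing the variance of the actor's policy-gradient updates.
   rng = np.random.default_rng(0)
   true_means = np.array([0.3, 0.7])    # hypothetical arm payouts
   theta, V = np.zeros(2), 0.0
   actor_lr, critic_lr = 0.1, 0.05

   for _ in range(3000):
       probs = np.exp(theta) / np.exp(theta).sum()
       a = int(rng.choice(2, p=probs))
       r = float(rng.random() < true_means[a])
       advantage = r - V                      # better or worse than expected?
       grad = -probs
       grad[a] += 1.0                         # d log pi(a) / d theta
       theta += actor_lr * advantage * grad   # actor: policy-gradient step
       V += critic_lr * advantage             # critic: move V toward mean reward

   print("action probabilities:", (np.exp(theta) / np.exp(theta).sum()).round(3))
   ```

   Compared with the REINFORCE sketch, the only change is that the raw reward is replaced by the advantage, which typically stabilizes learning.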

---

## 🛠️ Requirements

To run the notebooks, ensure the following are installed:

- Python 3.8+
- Jupyter Notebook / JupyterLab
- NumPy
- Pandas
- Matplotlib
- (Optional) SciPy, scikit-learn

Install dependencies using:

```bash
pip install numpy pandas matplotlib scikit-learn jupyter
```

---

## ▶️ How to Run

1. Clone the repository:

   ```bash
   git clone https://github.com/TechMLW/QuantFP
   cd QuantFP
   ```

2. Launch Jupyter Notebook:

   ```bash
   jupyter notebook
   ```

3. Open any `.ipynb` file and run the cells sequentially.

---

## 🎯 Purpose

This repository is intended for:

- Learning core machine learning and reinforcement learning algorithms
- Academic coursework and self-study
- Experimentation with algorithmic concepts from scratch

---