Additional Paper List for ML-based Optimisation
Additional papers on optimisation in Computer Systems for extended reading. When you have free time, pick any paper that interests you!
A1. G. Venkatesh et al.: Accelerating Deep Convolutional Networks using Low-Precision and Sparsity, ICASSP, 2017.
A2. V. Mnih et al.: Asynchronous Methods for Deep Reinforcement Learning, ICML, 2016.
A3. B. Bodin, L. Nardi, M.Z. Zia et al.: Integrating Algorithmic Parameters into Benchmarking and Design Space Exploration in 3D Scene Understanding, PACT, 2016.
A4. V. Mnih et al.: Playing Atari with Deep Reinforcement Learning, NIPS, 2013.
A5. S. Palkar, J. Thomas, A. Shanbhag, D. Narayanan, H. Pirk, M. Schwarzkopf, S. Amarasinghe, and M. Zaharia: Weld: A Common Runtime for High Performance Data Analytics, CIDR, 2017.
A6. D. Kingma, J. Ba: Adam: A Method for Stochastic Optimization, ICLR, 2015.
A7. Z. Jia, S. Lin, R. Ying, J. You, J. Leskovec, A. Aiken: Redundancy-Free Computation Graphs for Graph Neural Networks, arXiv, 2019.
A8. N. K. Ahmed et al.: On Sampling from Massive Graph Streams, VLDB, 2017.
A9. G. Malkomes, B. Cheng, E. Hans Lee, and M. McCourt: Beyond the Pareto Efficient Frontier: Constraint Active Search for Multi-objective Experimental Design, PMLR, 2021.
A10. J. Maronas, O. Hamelijnck, J. Knoblauch, and T. Damoulas: Transforming Gaussian Processes with Normalizing Flows, AISTATS, 2021.
A11. F. Yang et al.: LFTF: A Framework for Efficient Tensor Analytics at Scale, VLDB, 2017.
A12. H. Mao et al.: Neural Adaptive Video Streaming with Pensieve, SIGCOMM, 2017.
A13. K. LaCurts et al.: Cicada: Introducing Predictive Guarantees for Cloud Networks, HotCloud, 2014.
A14. H. Hoffmann et al.: Dynamic Knobs for Responsive Power-Aware Computing, ASPLOS, 2011.
A15. N.J. Yadwadkar, B. Hariharan, J. Gonzalez, and R. Katz: Faster Jobs in Distributed Data Processing using Multi-Task Learning, SDM, 2015.
A16. X. Dutreilh et al.: Using Reinforcement Learning for Autonomic Resource Allocation in Clouds: Towards a Fully Automated Workflow, ICAS, 2011.
A17. J. Eastep et al.: Smart Data Structures: An Online Machine Learning Approach to Multicore Data Structures, ICAC, 2011.
A18. E. Ipek et al.: Self-Optimizing Memory Controllers: A Reinforcement Learning Approach, ISCA, 2008.
A19. S. Teerapittayanon et al.: Distributed Deep Neural Networks over the Cloud, the Edge and End Devices, ICDCS, 2017.
A20. D. Baylor et al.: TFX: A TensorFlow-Based Production-Scale Machine Learning Platform, KDD, 2017.
A21. H. Mao et al.: Resource Management with Deep Reinforcement Learning, HotNets, 2016.
A22. M. Raghu et al.: On the Expressive Power of Deep Neural Networks, PMLR, 2017.
A23. N. Lambert et al.: Low-Level Control of a Quadrotor with Deep Model-Based Reinforcement Learning, IEEE Robotics and Automation Letters, 2019.
A24. Y. Kang et al.: Neurosurgeon: Collaborative Intelligence Between the Cloud and Mobile Edge, ASPLOS, 2017.
A25. Y. You et al.: Scaling Deep Learning on GPU and Knights Landing Clusters, SC, 2017.
A26. M. Kunjir and S. Babu: Black or White? How to Develop an AutoTuner for Memory-based Analytics, SIGMOD, 2020.
A27. K. Tzoumas, A. Deshpande, and C. S. Jensen: Efficiently Adapting Graphical Models for Selectivity Estimation, VLDB, 2013.
A28. L. Spiegelberg, R. Yesantharao, M. Schwarzkopf, T. Kraska: Tuplex: Data Science in Python at Native Code Speed, SIGMOD, 2021.
A29. U. Misra, R. Liaw, L. Dunlap et al.: RubberBand: Cloud-based Hyperparameter Tuning, EuroSys, 2021.
A30. F. Hutter, H.H. Hoos, and K. Leyton-Brown: Sequential Model-Based Optimization for General Algorithm Configuration, LION, 2011.
A31. J. Bergstra, Y. Bengio: Random Search for Hyper-Parameter Optimization, Journal of Machine Learning Research, 2012.
A32. B. Teabe et al.: Application-Specific Quantum for Multi-Core Platform Scheduler, EuroSys, 2016.
A33. L. Ma, W. Zhang, J. Jiao, W. Wang, M. Butrovich, W.S. Lim, P. Menon, and A. Pavlo: MB2: Decomposed Behavior Modeling for Self-Driving Database Management Systems, SIGMOD, 2021.
A34. R. Krishna, M.S. Iqbal, M.A. Javidian, B. Ray, and P. Jamshidi: CADET: Debugging and Fixing Misconfigurations using Counterfactual Reasoning, UAI, 2021.
A35. I. Gog, M. Schwarzkopf, A. Gleave, R. Watson, S. Hand: Firmament: Fast, Centralized Cluster Scheduling at Scale, OSDI, 2016.
A36. B. Zoph et al.: Learning Transferable Architectures for Scalable Image Recognition, arXiv, 2017.
A37. D. Golovin et al.: Google Vizier: A Service for Black-Box Optimization, KDD, 2017.
A38. M. Carvalho et al.: Long-Term SLOs for Reclaimed Cloud Computing Resources, SoCC, 2014.
A39. A. Ratner, S. Bach, H. Ehrenberg, J. Fries, S. Wu, and C. Ré: Snorkel: Rapid Training Data Creation with Weak Supervision, VLDB, 2017.
A40. A. Ratner, B. Hancock, J. Dunnmon, R. Goldman, and C. Ré: Snorkel MeTaL: Weak Supervision for Multi-Task Learning, DEEM, 2018.
A41. A. Koliousis, P. Watcharapichat, M. Weidlich, L. Mai, P. Costa, P. Pietzuch: CROSSBOW: Scaling Deep Learning with Small Batch Sizes on Multi-GPU Servers, VLDB, 2019.
A42. Ł. Kaiser et al.: Model-Based Reinforcement Learning for Atari, arXiv, 2019.
A43. H. Liu, K. Simonyan, and Y. Yang: DARTS: Differentiable Architecture Search, arXiv, 2018.
A44. M. Jaderberg, V. Dalibard, S. Osindero, W.M. Czarnecki: Population Based Training of Neural Networks, arXiv, 2017.
A45. L. Li et al.: A System for Massively Parallel Hyperparameter Tuning, MLSys, 2020.
A46. H. Dai, E. Khalil, Y. Zhang, B. Dilkina, L. Song: Learning Combinatorial Optimization Algorithms over Graphs, NIPS, 2017.
A47. J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, Q. Le, M. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, A. Ng: Large Scale Distributed Deep Networks, NIPS, 2012.
A48. F. Hutter et al.: An Evaluation of Sequential Model-Based Optimization for Expensive Blackbox Functions, GECCO, 2013.
A49. C. Chu et al.: Probability Functional Descent: A Unifying Perspective on GANs, Variational Inference, and Reinforcement Learning, PMLR, 2019.
A50. N. Goodman, V. Mansinghka, D. Roy, K. Bonawitz, J. Tenenbaum: Church: A Language for Generative Models, UAI, 2008.
A51. T. Rainforth et al.: Bayesian Optimization for Probabilistic Programs, NIPS, 2016.
A52. G. Tesauro et al.: A Hybrid Reinforcement Learning Approach to Autonomic Resource Allocation, ICAC, 2006.
A53. T. Domhan, J. T. Springenberg, F. Hutter: Speeding Up Automatic Hyperparameter Optimization of Deep Neural Networks by Extrapolation of Learning Curves, IJCAI, 2015.
A54. F. Hutter et al.: Algorithm Runtime Prediction: Methods & Evaluation, Artificial Intelligence (Elsevier), 2014.
A55. S. Palkar, J. Thomas, D. Narayanan, P. Thaker, R. Palamuttam, P. Negi, A. Shanbhag, M. Schwarzkopf, H. Pirk, S. Amarasinghe, S. Madden, M. Zaharia: Evaluating End-to-End Optimization for Data Analytics Applications in Weld, VLDB, 2018.
A56. H. Zhang et al.: Poseidon: An Efficient Communication Architecture for Distributed Deep Learning on GPU Clusters, ATC, 2017.
A57. S. Venkataraman et al.: Ernest: Efficient Performance Prediction for Large-Scale Advanced Analytics, NSDI, 2016.
A58. H. Mao, M. Schwarzkopf, S. B. Venkatakrishnan, Z. Meng, M. Alizadeh: Learning Scheduling Algorithms for Data Processing Clusters, SIGCOMM, 2019.
A59. C. Delimitrou et al.: Quasar: Resource-Efficient and QoS-Aware Cluster Management, ASPLOS, 2014.