A State-of-the-Art Survey of Concurrent Computation and Clustering of Parallel Computing for Distributed Systems

Abstract

This paper surveys prior work on clustering and parallel computing for distributed systems. The focus is on the strengths of previous studies in this field and how they contribute to improving the performance of distributed systems. Several techniques are reviewed, each with its own strong and weak points. The main challenges across these techniques range from increasing system performance to improving response time and avoiding runtime overhead. To address concurrent computation, together with the classification of parallel computing for distributed systems, in a more specific way, the paper presents a comprehensive feature study and comparison of the synchronous (SYNC) and asynchronous (ASYNC) execution modes.
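As a minimal sketch of the SYNC/ASYNC distinction the survey compares, the toy code below contrasts the two modes on a simplified PageRank-style vertex update. The graph, function names, damping factor, and iteration count are illustrative assumptions chosen for this example; they are not taken from any of the surveyed systems.

# Toy illustration of SYNC vs. ASYNC execution on a simplified PageRank update.
# All names and values here are illustrative assumptions, not from the survey.

def pagerank_sync(graph, iters=20, d=0.85):
    """SYNC (bulk-synchronous) mode: each sweep reads only the ranks from the
    previous iteration and commits the new values together at a barrier."""
    rank = {v: 1.0 / len(graph) for v in graph}
    for _ in range(iters):
        new_rank = {}
        for v in graph:
            incoming = sum(rank[u] / len(graph[u]) for u in graph if v in graph[u])
            new_rank[v] = (1 - d) / len(graph) + d * incoming
        rank = new_rank          # barrier: all updates become visible at once
    return rank

def pagerank_async(graph, iters=20, d=0.85):
    """ASYNC mode: updates are applied in place, so later vertices in the same
    sweep already observe the newest values (no global barrier)."""
    rank = {v: 1.0 / len(graph) for v in graph}
    for _ in range(iters):
        for v in graph:
            incoming = sum(rank[u] / len(graph[u]) for u in graph if v in graph[u])
            rank[v] = (1 - d) / len(graph) + d * incoming   # visible immediately
    return rank

if __name__ == "__main__":
    # Small example graph given as out-edge adjacency lists (every node has out-edges).
    g = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
    print(pagerank_sync(g))
    print(pagerank_async(g))

The only difference between the two functions is when updates become visible: SYNC commits them at a per-iteration barrier, which keeps execution deterministic and easy to reason about, while ASYNC applies them immediately, which often propagates information faster per sweep but makes ordering and consistency harder to control.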

Keywords

Distributed Computing, Distributed Systems, Clustering System, Parallel Systems
